
Auto moving to destination point #4

Closed
hongha1412 opened this issue Apr 19, 2023 · 6 comments

Comments

@hongha1412

Instead of moving one cell per OpenAI API call, please let OpenAI decide where to move or which action to take next, to prevent 429 HTTP status codes from the OpenAI API (too many requests). Then use auto-move to put the character in the right position.
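
Something like the rough sketch below is what I have in mind: one OpenAI call decides a destination, and the client walks the character there on its own. The `agent` shape and the `askModelForAction` helper are hypothetical placeholders, not the actual GPTRPG code.

```js
// Hypothetical sketch only: `agent` and `askModelForAction` are placeholders,
// not the real GPTRPG objects.
async function actOnce(agent, askModelForAction) {
  // A single OpenAI call picks the whole plan instead of one call per cell.
  const action = await askModelForAction(agent);

  if (action.type === 'navigate') {
    // Auto-move one cell per tick toward the target, with no further API calls.
    while (agent.x !== action.x || agent.y !== action.y) {
      if (agent.x < action.x) agent.x += 1;
      else if (agent.x > action.x) agent.x -= 1;
      else if (agent.y < action.y) agent.y += 1;
      else agent.y -= 1;
      await new Promise((resolve) => setTimeout(resolve, 250)); // simulated tick
    }
  }

  return action;
}
```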

@dzoba
Owner

dzoba commented Apr 20, 2023

If I understand your message, this is accomplished with the Navigate action. Did you see that?

@hongha1412
Author

> If I understand your message, this is accomplished with the Navigate action. Did you see that?

Yep, you're correct, but I'm still facing the 429 HTTP status code.
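
For now I could also wrap the request in a retry with exponential backoff so a 429 doesn't crash the agent. A minimal sketch, assuming the OpenAI call is wrapped in a function like `callOpenAI()` (hypothetical name) and that the thrown error is the axios error from the openai Node client:

```js
// Minimal backoff sketch; `callOpenAI` stands in for whatever function the agent
// uses to hit the chat completions endpoint.
async function withBackoff(callOpenAI, maxRetries = 5) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await callOpenAI();
    } catch (err) {
      const status = err.response && err.response.status;
      if (status !== 429 || attempt === maxRetries) throw err; // only retry rate limits
      const delayMs = 1000 * 2 ** attempt; // 1s, 2s, 4s, ...
      console.log(`Got 429, retrying in ${delayMs / 1000}s`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```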

@dzoba
Owner

dzoba commented Apr 21, 2023

Can you paste the full error?

@hongha1412
Author

hongha1412 commented Apr 21, 2023

Of course, please take a look at the log below.
It occurs after running for a few minutes.

> gptrpg@1.0.0 start
> concurrently "npm:start --prefix agent" "npm:start --prefix ui-admin"

[start] 
[start] > agent@1.0.0 start  
[start] > nodemon index.js   
[start] 
[start] 
[start] > gptrpg@0.1.0 start 
[start] > react-scripts start
[start] 
[start] [nodemon] 2.0.22
[start] [nodemon] to restart at any time, enter `rs`
[start] [nodemon] watching path(s): *.*
[start] [nodemon] watching extensions: js,mjs,json
[start] [nodemon] starting `node index.js`
[start] (node:22668) ExperimentalWarning: Import assertions are not a stable feature of the JavaScript language. Avoid relying on their current behavior and syntax as those might change in a future version of Node.js.
[start] (Use `node --trace-warnings ...` to show where the warning was created)
[start] (node:22668) ExperimentalWarning: Importing JSON modules is an experimental feature and might change at any time
[start] Browserslist: caniuse-lite is outdated. Please run:
[start]   npx browserslist@latest --update-db
[start]   Why you should do it regularly: https://github.com/browserslist/browserslist#browsers-data-updating
[start] (node:21756) [DEP_WEBPACK_DEV_SERVER_ON_AFTER_SETUP_MIDDLEWARE] DeprecationWarning: 'onAfterSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
[start] (Use `node --trace-deprecation ...` to show where the warning was created)
[start] (node:21756) [DEP_WEBPACK_DEV_SERVER_ON_BEFORE_SETUP_MIDDLEWARE] DeprecationWarning: 'onBeforeSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
[start] Starting the development server...
[start]
[start] Compiled successfully!
[start]
[start] You can now view gptrpg in the browser.
[start]
[start]   Local:            http://localhost:3000
[start]   On Your Network:  http://192.168.10.116:3000
[start]
[start] Note that the development build is not optimized.
[start] To create a production build, use npm run build.
[start]
[start] webpack compiled successfully
[start] Creating Agent: agent1
[start] requestNextMove message for agent: agent1
[start] OpenAI response {
[start]   "action": {
[start]     "type": "wait"
[start]   }
[start] }
[start] requestNextMove message for agent: agent1
[start] OpenAI response {
[start]   action: {
[start]     type: "move",
[start]     direction: "right"
[start]   }
[start] }
[start] requestNextMove message for agent: agent1
[start] OpenAI response {
[start]   "action": {
[start]     "type": "move",
[start]     "direction": "right"
[start]   }
[start] }
[start] requestNextMove message for agent: agent1
[start] OpenAI response {
[start]   action: {
[start]     type: "move",
[start]     direction: "up"
[start]   }
[start] }
[start] requestNextMove message for agent: agent1
[start] Error processing GPT-3 response: Error: Request failed with status code 429
[start]     at createError (F:\WORK\openai\gptrpg\agent\node_modules\axios\lib\core\createError.js:16:15)
[start]     at settle (F:\WORK\openai\gptrpg\agent\node_modules\axios\lib\core\settle.js:17:12)
[start]     at IncomingMessage.handleStreamEnd (F:\WORK\openai\gptrpg\agent\node_modules\axios\lib\adapters\http.js:322:11)
[start]     at IncomingMessage.emit (node:events:525:35)
[start]     at endReadableNT (node:internal/streams/readable:1359:12)
[start]     at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
[start]   config: {
[start]     transitional: {
[start]       silentJSONParsing: true,
[start]       forcedJSONParsing: true,
[start]       clarifyTimeoutError: false
[start]     },
[start]     adapter: [Function: httpAdapter],
[start]     transformRequest: [ [Function: transformRequest] ],
[start]     transformResponse: [ [Function: transformResponse] ],
[start]     timeout: 0,
[start]     xsrfCookieName: 'XSRF-TOKEN',
[start]     xsrfHeaderName: 'X-XSRF-TOKEN',
[start]     maxContentLength: -1,
[start]     maxBodyLength: -1,
[start]     validateStatus: [Function: validateStatus],
[start]     headers: {
[start]       Accept: 'application/json, text/plain, */*',
[start]       'Content-Type': 'application/json',
[start]       'User-Agent': 'OpenAI/NodeJS/3.2.1',
[start]       Authorization: 'Bearer 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',
[start]       'Content-Length': 1284
[start]     },
[start]     method: 'post',
[start]     data: '{"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"# Introduction\\n\\n      You are acting as an agent living in a simulated 2 dimensional universe. Your goal is to exist as best as you see fit an
d meet your needs.\\n      \\n      # Capabilities\\n      \\n      You have a limited set of capabilities. They are listed below:\\n      \\n      * Move (up, down, left, right)\\n      * Wait\\n      * Navigate (to an x,y coor
dinate)\\n      * Sleep\\n\\n      # Responses\\n      \\n      You must supply your responses in the form of valid JSON objects.  Your responses will specify which of the above actions you intend to take.  The following is an e
xample of a valid response:\\n      \\n      {\\n        action: {\\n          type: \\"move\\",\\n          direction: \\"up\\" | \\"down\\" | \\"left\\" | \\"right\\"\\n        }\\n      }\\n      \\n      # Perceptions\\n    
  \\n      You will have access to data to help you make your decisions on what to do next.\\n      \\n      For now, this is the information you have access to:\\n\\n      Position: \\n      {\\"x\\":9,\\"y\\":5}\\n\\n      Sur
roundings:\\n      {\\"up\\":\\"wall\\",\\"down\\":\\"walkable\\",\\"left\\":\\"walkable\\",\\"right\\":\\"walkable\\"}\\n\\n      Sleepiness:\\n      5 out of 10\\n\\n      The JSON response indicating the next move is.\\n     
 "}]}',
[start]     url: 'https://api.openai.com/v1/chat/completions'
[start]   },
[start]   request: <ref *1> ClientRequest {
[start]     _events: [Object: null prototype] {
[start]       abort: [Function (anonymous)],
[start]       aborted: [Function (anonymous)],
[start]       connect: [Function (anonymous)],
[start]       error: [Function (anonymous)],
[start]       socket: [Function (anonymous)],
[start]       timeout: [Function (anonymous)],
[start]       finish: [Function: requestOnFinish]
[start]     },
[start]     _eventsCount: 7,
[start]     _maxListeners: undefined,
[start]     outputData: [],
[start]     outputSize: 0,
[start]     writable: true,
[start]     destroyed: false,
[start]     _last: true,
[start]     chunkedEncoding: false,
[start]     shouldKeepAlive: false,
[start]     maxRequestsOnConnectionReached: false,
[start]     _defaultKeepAlive: true,
[start]     useChunkedEncodingByDefault: true,
[start]     sendDate: false,
[start]     _removedConnection: false,
[start]     _removedContLen: false,
[start]     _removedTE: false,
[start]     strictContentLength: false,
[start]     _contentLength: 1284,
[start]     _hasBody: true,
[start]     _trailer: '',
[start]     finished: true,
[start]     _headerSent: true,
[start]     _closed: false,
[start]     socket: TLSSocket {
[start]       _tlsOptions: [Object],
[start]       _secureEstablished: true,
[start]       _securePending: false,
[start]       _newSessionPending: false,
[start]       _controlReleased: true,
[start]       secureConnecting: false,
[start]       _SNICallback: null,
[start]       servername: 'api.openai.com',
[start]       alpnProtocol: false,
[start]       authorized: true,
[start]       authorizationError: null,
[start]       encrypted: true,
[start]       _events: [Object: null prototype],
[start]       _eventsCount: 10,
[start]       connecting: false,
[start]       _hadError: false,
[start]       _parent: null,
[start]       _host: 'api.openai.com',
[start]       _closeAfterHandlingError: false,
[start]       _readableState: [ReadableState],
[start]       _maxListeners: undefined,
[start]       _writableState: [WritableState],
[start]       allowHalfOpen: false,
[start]       _sockname: null,
[start]       _pendingData: null,
[start]       _pendingEncoding: '',
[start]       server: undefined,
[start]       _server: null,
[start]       ssl: [TLSWrap],
[start]       _requestCert: true,
[start]       _rejectUnauthorized: true,
[start]       parser: null,
[start]       _httpMessage: [Circular *1],
[start]       [Symbol(res)]: [TLSWrap],
[start]       [Symbol(verified)]: true,
[start]       [Symbol(pendingSession)]: null,
[start]       [Symbol(async_id_symbol)]: 170,
[start]       [Symbol(kHandle)]: [TLSWrap],
[start]       [Symbol(lastWriteQueueSize)]: 0,
[start]       [Symbol(timeout)]: null,
[start]       [Symbol(kBuffer)]: null,
[start]       [Symbol(kBufferCb)]: null,
[start]       [Symbol(kBufferGen)]: null,
[start]       [Symbol(kCapture)]: false,
[start]       [Symbol(kSetNoDelay)]: false,
[start]       [Symbol(kSetKeepAlive)]: true,
[start]       [Symbol(kSetKeepAliveInitialDelay)]: 60,
[start]       [Symbol(kBytesRead)]: 0,
[start]       [Symbol(kBytesWritten)]: 0,
[start]       [Symbol(connect-options)]: [Object]
[start]     },
[start]     _header: 'POST /v1/chat/completions HTTP/1.1\r\n' +
[start]       'Accept: application/json, text/plain, */*\r\n' +
[start]       'Content-Type: application/json\r\n' +
[start]       'User-Agent: OpenAI/NodeJS/3.2.1\r\n' +
[start]       'Authorization: Bearer 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\r\n' +
[start]       'Content-Length: 1284\r\n' +
[start]       'Host: api.openai.com\r\n' +
[start]       'Connection: close\r\n' +
[start]       '\r\n',
[start]     _keepAliveTimeout: 0,
[start]     _onPendingData: [Function: nop],
[start]     agent: Agent {
[start]       _events: [Object: null prototype],
[start]       _eventsCount: 2,
[start]       _maxListeners: undefined,
[start]       defaultPort: 443,
[start]       protocol: 'https:',
[start]       options: [Object: null prototype],
[start]       requests: [Object: null prototype] {},
[start]       sockets: [Object: null prototype],
[start]       freeSockets: [Object: null prototype] {},
[start]       keepAliveMsecs: 1000,
[start]       keepAlive: false,
[start]       maxSockets: Infinity,
[start]       maxFreeSockets: 256,
[start]       scheduling: 'lifo',
[start]       maxTotalSockets: Infinity,
[start]       totalSocketCount: 1,
[start]       maxCachedSessions: 100,
[start]       _sessionCache: [Object],
[start]       [Symbol(kCapture)]: false
[start]     },
[start]     socketPath: undefined,
[start]     method: 'POST',
[start]     maxHeaderSize: undefined,
[start]     insecureHTTPParser: undefined,
[start]     joinDuplicateHeaders: undefined,
[start]     path: '/v1/chat/completions',
[start]     _ended: true,
[start]     res: IncomingMessage {
[start]       _readableState: [ReadableState],
[start]       _events: [Object: null prototype],
[start]       _eventsCount: 4,
[start]       _maxListeners: undefined,
[start]       socket: [TLSSocket],
[start]       httpVersionMajor: 1,
[start]       httpVersionMinor: 1,
[start]       httpVersion: '1.1',
[start]       complete: true,
[start]       rawHeaders: [Array],
[start]       rawTrailers: [],
[start]       joinDuplicateHeaders: undefined,
[start]       aborted: false,
[start]       upgrade: false,
[start]       url: '',
[start]       method: null,
[start]       statusCode: 429,
[start]       statusMessage: 'Too Many Requests',
[start]       client: [TLSSocket],
[start]       _consuming: false,
[start]       _dumped: false,
[start]       req: [Circular *1],
[start]       responseUrl: 'https://api.openai.com/v1/chat/completions',
[start]       redirects: [],
[start]       [Symbol(kCapture)]: false,
[start]       [Symbol(kHeaders)]: [Object],
[start]       [Symbol(kHeadersCount)]: 28,
[start]       [Symbol(kTrailers)]: null,
[start]       [Symbol(kTrailersCount)]: 0
[start]     },
[start]     aborted: false,
[start]     timeoutCb: null,
[start]     upgradeOrConnect: false,
[start]     parser: null,
[start]     maxHeadersCount: null,
[start]     reusedSocket: false,
[start]     host: 'api.openai.com',
[start]     protocol: 'https:',
[start]     _redirectable: Writable {
[start]       _writableState: [WritableState],
[start]       _events: [Object: null prototype],
[start]       _eventsCount: 3,
[start]       _maxListeners: undefined,
[start]       _options: [Object],
[start]       _ended: true,
[start]       _ending: true,
[start]       _redirectCount: 0,
[start]       _redirects: [],
[start]       _requestBodyLength: 1284,
[start]       _requestBodyBuffers: [],
[start]       _onNativeResponse: [Function (anonymous)],
[start]       _currentRequest: [Circular *1],
[start]       _currentUrl: 'https://api.openai.com/v1/chat/completions',
[start]       [Symbol(kCapture)]: false
[start]     },
[start]     [Symbol(kCapture)]: false,
[start]     [Symbol(kBytesWritten)]: 0,
[start]     [Symbol(kNeedDrain)]: false,
[start]     [Symbol(corked)]: 0,
[start]     [Symbol(kOutHeaders)]: [Object: null prototype] {
[start]       accept: [Array],
[start]       'content-type': [Array],
[start]       'user-agent': [Array],
[start]       authorization: [Array],
[start]       'content-length': [Array],
[start]       host: [Array]
[start]     },
[start]     [Symbol(errored)]: null,
[start]     [Symbol(kUniqueHeaders)]: null
[start]   },
[start]   response: {
[start]     status: 429,
[start]     statusText: 'Too Many Requests',
[start]     headers: {
[start]       date: 'Fri, 21 Apr 2023 17:38:03 GMT',
[start]       'content-type': 'application/json; charset=utf-8',
[start]       'content-length': '478',
[start]       connection: 'close',
[start]       vary: 'Origin',
[start]       'x-ratelimit-limit-requests': '3',
[start]       'x-ratelimit-remaining-requests': '0',
[start]       'x-ratelimit-reset-requests': '52.433s',
[start]       'x-request-id': '1aefda650371d2cdc95b14071db79956',
[start]       'strict-transport-security': 'max-age=15724800; includeSubDomains',
[start]       'cf-cache-status': 'DYNAMIC',
[start]       server: 'cloudflare',
[start]       'cf-ray': '7bb76600484c6ba8-SIN',
[start]       'alt-svc': 'h3=":443"; ma=86400, h3-29=":443"; ma=86400'
[start]     },
[start]     config: {
[start]       transitional: [Object],
[start]       adapter: [Function: httpAdapter],
[start]       transformRequest: [Array],
[start]       transformResponse: [Array],
[start]       timeout: 0,
[start]       xsrfCookieName: 'XSRF-TOKEN',
[start]       xsrfHeaderName: 'X-XSRF-TOKEN',
[start]       maxContentLength: -1,
[start]       maxBodyLength: -1,
[start]       validateStatus: [Function: validateStatus],
[start]       headers: [Object],
[start]       method: 'post',
[start]       data: '{"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"# Introduction\\n\\n      You are acting as an agent living in a simulated 2 dimensional universe. Your goal is to exist as best as you see fit 
and meet your needs.\\n      \\n      # Capabilities\\n      \\n      You have a limited set of capabilities. They are listed below:\\n      \\n      * Move (up, down, left, right)\\n      * Wait\\n      * Navigate (to an x,y co
ordinate)\\n      * Sleep\\n\\n      # Responses\\n      \\n      You must supply your responses in the form of valid JSON objects.  Your responses will specify which of the above actions you intend to take.  The following is an
 example of a valid response:\\n      \\n      {\\n        action: {\\n          type: \\"move\\",\\n          direction: \\"up\\" | \\"down\\" | \\"left\\" | \\"right\\"\\n        }\\n      }\\n      \\n      # Perceptions\\n  
    \\n      You will have access to data to help you make your decisions on what to do next.\\n      \\n      For now, this is the information you have access to:\\n\\n      Position: \\n      {\\"x\\":9,\\"y\\":5}\\n\\n      S
urroundings:\\n      {\\"up\\":\\"wall\\",\\"down\\":\\"walkable\\",\\"left\\":\\"walkable\\",\\"right\\":\\"walkable\\"}\\n\\n      Sleepiness:\\n      5 out of 10\\n\\n      The JSON response indicating the next move is.\\n   
   "}]}',
[start]       url: 'https://api.openai.com/v1/chat/completions'
[start]     },
[start]     request: <ref *1> ClientRequest {
[start]       _events: [Object: null prototype],
[start]       _eventsCount: 7,
[start]       _maxListeners: undefined,
[start]       outputData: [],
[start]       outputSize: 0,
[start]       writable: true,
[start]       destroyed: false,
[start]       _last: true,
[start]       chunkedEncoding: false,
[start]       shouldKeepAlive: false,
[start]       maxRequestsOnConnectionReached: false,
[start]       _defaultKeepAlive: true,
[start]       useChunkedEncodingByDefault: true,
[start]       sendDate: false,
[start]       _removedConnection: false,
[start]       _removedContLen: false,
[start]       _removedTE: false,
[start]       strictContentLength: false,
[start]       _contentLength: 1284,
[start]       _hasBody: true,
[start]       _trailer: '',
[start]       finished: true,
[start]       _headerSent: true,
[start]       _closed: false,
[start]       socket: [TLSSocket],
[start]       _header: 'POST /v1/chat/completions HTTP/1.1\r\n' +
[start]         'Accept: application/json, text/plain, */*\r\n' +
[start]         'Content-Type: application/json\r\n' +
[start]         'User-Agent: OpenAI/NodeJS/3.2.1\r\n' +
[start]         'Authorization: Bearer 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\r\n' +
[start]         'Content-Length: 1284\r\n' +
[start]         'Host: api.openai.com\r\n' +
[start]         'Connection: close\r\n' +
[start]         '\r\n',
[start]       _keepAliveTimeout: 0,
[start]       _onPendingData: [Function: nop],
[start]       agent: [Agent],
[start]       socketPath: undefined,
[start]       method: 'POST',
[start]       maxHeaderSize: undefined,
[start]       insecureHTTPParser: undefined,
[start]       joinDuplicateHeaders: undefined,
[start]       path: '/v1/chat/completions',
[start]       _ended: true,
[start]       res: [IncomingMessage],
[start]       aborted: false,
[start]       timeoutCb: null,
[start]       upgradeOrConnect: false,
[start]       parser: null,
[start]       maxHeadersCount: null,
[start]       reusedSocket: false,
[start]       host: 'api.openai.com',
[start]       protocol: 'https:',
[start]       _redirectable: [Writable],
[start]       [Symbol(kCapture)]: false,
[start]       [Symbol(kBytesWritten)]: 0,
[start]       [Symbol(kNeedDrain)]: false,
[start]       [Symbol(corked)]: 0,
[start]       [Symbol(kOutHeaders)]: [Object: null prototype],
[start]       [Symbol(errored)]: null,
[start]       [Symbol(kUniqueHeaders)]: null
[start]     },
[start]     data: { error: [Object] }
[start]   },
[start]   isAxiosError: true,
[start]   toJSON: [Function: toJSON]
[start] }
[start] Terminate batch job (Y/N)? Terminate batch job (Y/N)? Terminate batch job (Y/N)? npm run start --prefix agent exited with code 1
[start] npm run start --prefix ui-admin exited with code 1

Process finished with exit code 1

@Distil62

I think it's because your OpenAI API key has reached its usage limit.
You can check this at https://platform.openai.com/account/usage
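
For what it's worth, the headers in the pasted error also show `x-ratelimit-limit-requests: 3` with `x-ratelimit-reset-requests: 52.433s`, so pacing the requests to that window can help too. A minimal sketch (the header names are copied from the response above; the parsing is my assumption):

```js
// Header names taken from the 429 response above; parsing is an assumption.
function msUntilReset(headers) {
  const reset = headers['x-ratelimit-reset-requests']; // e.g. "52.433s"
  const seconds = parseFloat(reset);                   // parseFloat drops the trailing "s"
  return Number.isNaN(seconds) ? 60000 : Math.ceil(seconds * 1000);
}

async function waitForRateLimit(err) {
  const waitMs = msUntilReset(err.response.headers);
  console.log(`Rate limited, waiting ${waitMs} ms before the next request`);
  await new Promise((resolve) => setTimeout(resolve, waitMs));
}
```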

@dzoba
Owner

dzoba commented May 2, 2023

This is not a GPTRPG bug, closing.

@dzoba dzoba closed this as completed May 2, 2023