OpenAI API Error: 400 - Bad Request #137
Comments
Hmm, it's a server-side error, so it could be a problem with OpenAI. Perhaps it's because we changed the way we count tokens: https://community.openai.com/t/error-retrieving-completions-400-bad-request/34004/6 Are your diffs really large?
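To make the token-counting angle concrete, here is a minimal sketch of estimating whether a diff is near a model's context limit. It is hedged: it uses the common ~4 characters-per-token rule of thumb rather than OpenAI's actual BPE tokenizer (the `tiktoken` library would give exact counts), and the limit and overhead values are illustrative assumptions.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Roughly estimate the token count of a string.

    Uses the common ~4 characters-per-token heuristic for English text;
    exact counts require a BPE tokenizer such as tiktoken.
    """
    return max(1, round(len(text) / chars_per_token))


def diff_fits(diff: str, token_limit: int = 4096, prompt_overhead: int = 200) -> bool:
    """Check whether a diff plausibly fits under a model's context limit.

    The limit and overhead are illustrative defaults, leaving headroom
    for the surrounding prompt and the completion.
    """
    return estimate_tokens(diff) + prompt_overhead <= token_limit
```

For example, `diff_fits("x" * 100)` passes easily, while a multi-thousand-line diff would not.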
I have exactly the same problem, but I always get this 400 error.
It would be helpful if you could share more info. What diff are you sending?
No, not really; the size of the diff doesn't make a difference. The error occurs on small and big diffs.
Can you provide it? I can't do much with the little information provided right now.
Just chiming in, I'm getting
I have the same issue.
I tested it, and it probably happens with big diffs, where it doesn't show the "it's a big diff" feedback message. It would probably be a good idea to have an explainer message for this 400 error, such as
EDIT: after >
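Along the lines of the explainer message suggested above, here is a minimal sketch of mapping a raw HTTP status to a friendlier hint before surfacing it. This is a hypothetical helper, not aicommits' actual code: the threshold and wording are illustrative assumptions.

```python
def explain_openai_error(status_code: int, diff_chars: int, char_budget: int = 16000) -> str:
    """Turn a raw OpenAI HTTP error status into a friendlier message.

    Hypothetical helper: the char_budget threshold and the wording are
    illustrative, not what aicommits actually ships.
    """
    if status_code == 400 and diff_chars > char_budget:
        return (
            "OpenAI API Error: 400 - Bad Request. "
            "Your diff is likely too large for the model's context window; "
            "try staging a smaller change."
        )
    if status_code == 400:
        return "OpenAI API Error: 400 - Bad Request. The request payload was rejected."
    if status_code == 404:
        return (
            "OpenAI API Error: 404 - Not Found. "
            "Check the API endpoint URL and any proxy/VPN configuration."
        )
    return f"OpenAI API Error: {status_code}."
```

A message like this would have distinguished the big-diff 400s in this thread from the unrelated 404 case below.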
@Mobilpadde 404 is a completely different error and a more problematic one. It means the server you're reaching is returning a "Not found". Do you have some sort of network configuration (e.g. a VPN)? For anyone else experiencing a 400 error, an example of the diff you're sending would be very helpful so I can reproduce the problem on my end.
I am experiencing this as well. In my case, I am consistently getting 400s on this:

diff --git a/tracabilite.ipynb b/tracabilite.ipynb
new file mode 100644
index 0000000..6301d1f
--- /dev/null
+++ b/tracabilite.ipynb
@@ -0,0 +1,121 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# Script utilitaire pour la traçabilité des contenants Berny"
+ ],
+ "metadata": {
+ "collapsed": false
+ }
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 30,
+ "outputs": [],
+ "source": [
+ "file_items = \"export_public_Item_2023_03_08_16_03_42_201.csv\"\n",
+ "file_cleaning = \"export_public_Cleaning_2023_03_08_16_43_10_595.csv\"\n",
+ "file_delivery = \"export_public_Delivery_2023_03_08_16_43_24_349.csv\"\n",
+ "file_returns = \"export_public_Delivery_back_2023_03_08_16_43_32_493.csv\""
+ ],
+ "metadata": {
+ "collapsed": false
+ }
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 31,
+ "outputs": [],
+ "source": [
+ "import csv\n",
+ "import pandas as pd\n",
+ "import numpy as np\n",
+ "import matplotlib.pyplot as plt\n",
+ "\n",
+ "items = pd.read_csv(file_items, sep=\",\", encoding=\"utf-8\")\n",
+ "cleanings = pd.read_csv(file_cleaning, sep=\",\", encoding=\"utf-8\")\n",
+ "deliveries = pd.read_csv(file_delivery, sep=\",\", encoding=\"utf-8\")\n",
+ "returns = pd.read_csv(file_returns, sep=\",\", encoding=\"utf-8\")"
+ ],
+ "metadata": {
+ "collapsed": false
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "Items: ['id', 'size', 'arrival_date', 'arrival_country', 'in_transit', 'type', 'is_defunct']\n",
+ "Cleanings: ['item_id', 'created_date', 'cleaning_date', 'cleaner', 'id']\n",
+ "Deliveries: ['item_id', 'transportedIn', 'createdDate', 'delivery_date', 'id', 'shop']\n",
+ "Returns: ['is_notified', 'is_unsold', 'actor', 'item_id', 'date', 'lot', 'id', 'shop']"
+ ],
+ "metadata": {
+ "collapsed": false
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# 1. Quels sont les contenants qui ont été utilisés au moins une fois ?"
+ ],
+ "metadata": {
+ "collapsed": false
+ }
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 32,
+ "outputs": [],
+ "source": [
+ "items_cleaned = cleanings['item_id'].unique()\n",
+ "items_delivered = deliveries['item_id'].unique()\n",
+ "items_returned = returns['item_id'].unique()\n",
+ "items_used = np.unique(np.concatenate((items_cleaned, items_delivered, items_returned)))"
+ ],
+ "metadata": {
+ "collapsed": false
+ }
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 33,
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Nombre de contenants utilisés: 40857\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(\"Nombre de contenants utilisés: \" + str(len(items_used)))"
+ ],
+ "metadata": {
+ "collapsed": false
+ }
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 2
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython2",
+ "version": "2.7.6"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
It's probably because it's a big diff and the package is not throwing the correct message. Try using
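The suggestion above is cut off, but one common workaround is to clip the diff to a rough budget before sending it. A hedged sketch (the budget numbers are assumptions, and a real tool would count tokens with the model's tokenizer rather than characters):

```python
def truncate_diff(diff: str, max_tokens: int = 4096, chars_per_token: int = 4) -> str:
    """Clip a diff to an approximate token budget.

    Assumes ~4 characters per token; a real implementation would count
    tokens with the model's tokenizer. Cuts on a line boundary so the
    clipped diff doesn't end mid-line.
    """
    budget = max_tokens * chars_per_token
    if len(diff) <= budget:
        return diff
    clipped = diff[:budget]
    # Back up to the last full line if possible.
    return clipped[: clipped.rfind("\n") + 1] if "\n" in clipped else clipped
```

Truncating this way loses context from the tail of the diff, so it trades commit-message quality for avoiding the 400 entirely.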
I'm using the previous version for now to get this working.
@edu-hilario I don't know if this is any indication of how it should perform, but trying the prompt in the playground with that diff didn't give me a length error; it worked just fine. If I'm not mistaken, the token limit should be the same there, right?
Same here: @1.2.0 does work, but newer versions are broken.
I was finally able to reproduce this. Fix coming soon via #147
|
Bug description
OpenAI API Error: 400 - Bad Request
This one started to happen as soon as I updated from 1.2.0 to 1.3.0. It is a weird one because it happens some of the time, but not always, so reproducing it is just luck. I even changed the API key, but that doesn't seem to be the issue.

aicommits version
v1.3.0
Environment
Can you contribute a fix?