A simple RESTful API that analyzes strings, computes properties (length, palindrome check, unique characters, word count, character frequency, SHA-256 hash), stores them in a local JSON DB, and provides filtering and natural-language-like queries.
- Node.js (>=14)
- Express
- Clone:
git clone https://github.com/YOUR-USERNAME/string-analyzer-api.git
cd string-analyzer-api
- Install:
npm install
- Start:
npm start
The server runs at http://localhost:3000 by default.
Analyze and store a string. Request body (JSON):
{ "value": "your string here" }
Responses:
- 201 Created → returns stored object
- 409 Conflict → string already exists
- 400 Bad Request → missing "value"
- 422 Unprocessable Entity → non-string value
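The validation behind those status codes could be factored roughly like this; `validateCreate` and its `exists` callback (a stand-in for the DB lookup) are illustrative names, not the project's actual internals.

```javascript
// Sketch of POST /strings validation; `exists` stands in for a DB lookup.
function validateCreate(body, exists) {
  if (!body || body.value === undefined) {
    return { status: 400, error: 'Missing "value"' };          // 400 Bad Request
  }
  if (typeof body.value !== "string") {
    return { status: 422, error: '"value" must be a string' }; // 422 Unprocessable Entity
  }
  if (exists(body.value)) {
    return { status: 409, error: "String already exists" };    // 409 Conflict
  }
  return { status: 201 };                                      // 201 Created
}
```

An Express handler would then call this and forward the result with `res.status(result.status).json(...)`.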
Retrieve the analyzed string by exact value (URL-encoded).
Get all strings (supports query filters):
- is_palindrome (true/false)
- min_length (int)
- max_length (int)
- word_count (int)
- contains_character (single char)
Example: /strings?is_palindrome=true&min_length=3
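Applying those filters to the stored records might look like this sketch; the record field names (`length`, `is_palindrome`, `word_count`, `value`) are assumptions, and query values arrive as strings, as Express parses them.

```javascript
// Sketch of applying GET /strings query filters (field names assumed).
function applyFilters(records, query) {
  return records.filter((r) => {
    if (query.is_palindrome !== undefined &&
        String(r.is_palindrome) !== query.is_palindrome) return false;
    if (query.min_length !== undefined && r.length < Number(query.min_length)) return false;
    if (query.max_length !== undefined && r.length > Number(query.max_length)) return false;
    if (query.word_count !== undefined && r.word_count !== Number(query.word_count)) return false;
    if (query.contains_character !== undefined &&
        !r.value.includes(query.contains_character)) return false;
    return true;
  });
}
```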
Simple natural language filter parser. Example: /strings/filter-by-natural-language?query=all%20single%20word%20palindromic%20strings
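A rule-based parser of this kind can map keywords onto the same filters used by GET /strings; the rules below are illustrative, not the project's actual `nlpParser`.

```javascript
// Minimal rule-based sketch of the natural-language query parser.
// Maps recognized keywords onto the query filters documented above.
function nlpParse(query) {
  const q = query.toLowerCase();
  const filters = {};
  if (/palindrom/.test(q)) filters.is_palindrome = "true";
  if (/single[- ]word/.test(q)) filters.word_count = "1";
  const longer = q.match(/longer than (\d+)/);
  if (longer) filters.min_length = String(Number(longer[1]) + 1);
  return filters;
}
```

So "all single word palindromic strings" becomes `{ is_palindrome: "true", word_count: "1" }`, which the existing filter logic can apply unchanged.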
Delete a stored string (204 No Content on success).
Local JSON file: data/strings.json. For production, swap to a real DB.
Railway and Heroku are supported. No environment variables are required for local testing. When deploying, point the platform at your repo and set PORT if needed.
Create:
curl -X POST http://localhost:3000/strings -H "Content-Type: application/json" -d '{"value":"racecar"}'
Get all:
curl "http://localhost:3000/strings?is_palindrome=true"
Natural language:
curl "http://localhost:3000/strings/filter-by-natural-language?query=all%20single%20word%20palindromic%20strings"
- Computes and stores all requested properties.
- POST /strings includes 201, 409, 400, and 422 responses.
- GET /strings/:value, GET /strings with query filters, and DELETE /strings/:value are implemented.
- GET /strings/filter-by-natural-language is implemented with a minimal parser.
- Uses semantic response codes and JSON.
- Data stored in data/strings.json for simplicity; easy to swap for a DB.
- The nlpParser is intentionally lightweight; it's rule-based to handle the example queries. If you need more complex parsing, I can integrate compromise or another NLP library, but that adds dependencies.
- For production: replace JSON file storage with a database (Postgres, MongoDB). For concurrency safety on multi-instance deployments, a DB is required.
- For hosting on Railway: create a new project, link your GitHub repo, and Railway will auto-deploy. No environment variables are needed unless you change the port or storage.