A template repository for a TypeScript Backend Server powered by these libraries:
- NestJS: NodeJS backend framework
- @nestia/core: validation decorators, 20,000x faster than class-validator (a usage sketch follows this list)
- @nestia/sdk: SDK and Swagger Documents generator
- TypeORM and safe-typeorm: helper of TypeORM at the compilation level
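As a taste of what @nestia/core offers, here is a minimal, hypothetical controller using its TypedRoute and TypedBody decorators; the controller name and DTO fields are assumptions for illustration only, not part of this template.

```typescript
import { Controller } from "@nestjs/common";
import { TypedBody, TypedRoute } from "@nestia/core";

// Hypothetical DTO interfaces; in this template, DTOs live in src/api/structures.
interface IArticleStore {
    title: string;
    body: string;
}
interface IArticle extends IArticleStore {
    id: string;
}

@Controller("articles")
export class ArticlesController {
    /**
     * TypedBody() validates the request body against the pure TypeScript
     * interface, replacing class-validator decorators.
     */
    @TypedRoute.Post()
    public async store(@TypedBody() input: IArticleStore): Promise<IArticle> {
        // stubbed response, just for illustration
        return { id: "00000000-0000-0000-0000-000000000000", ...input };
    }
}
```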
Also, I've prepared a number of example backend projects based on this template. By reading this README.md document and exploring the example projects below, you can learn how to develop a TypeScript backend server with nestia and safe-typeorm.
- samchon/fake-iamport-server: Fake iamport server, but real SDK
- samchon/fake-toss-payments-server: Fake toss-payments server, but real SDK
If you want a new type of example backend project, or you have something to ask about TypeScript backend server development, don't hesitate and just write an issue for help. I, or other developers interested in this project, will support you.
Also, if you've developed a TypeScript backend server whose quality seems good enough to be an example for backend programming learners, please write an issue or send a pull request about the project.
Transform this template project to be yours.
When you've created a new backend project through this template, you can specialize it for your needs by replacing a few words. Replace the words below through an IDE-specific function like Edit > Replace in Files (Ctrl + Shift + H), which is supported by VSCode.
Before | After |
---|---|
ORGANIZATION | Your account or corporation name |
PROJECT | Your own project name |
AUTHOR | Author name |
DB_NAME | Database to connect |
DB_SCHEMA | Database schema to use |
DB_ACCOUNT | Database account to use, not root account |
https://github.com/samchon/backend | Your repository URL |
After those replacements, you should specialize the src/Configuration.ts and .github/workflows/build.yml files. Open those files and change their constant values to suit your project (a hypothetical sketch of such constants follows the list below). Also, open the markdown files like this README.md and write your own project story. Below is the list of the markdown files.
- .github/ISSUE_TEMPLATE/BUG_REPORT.md
- .github/ISSUE_TEMPLATE/FEATURE_REQUEST.md
- .github/ISSUE_TEMPLATE/QUESTION.md
- .github/PULL_REQUEST_TEMPLATE.md
- README.md
- CODE_OF_CONDUCT.md
- CONTRIBUTING.md
- INFRASTRUCTURE.md
- LICENSE
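For orientation only, the constants in src/Configuration.ts might look roughly like the hypothetical sketch below; the real names and values in this template differ, so always check the actual file.

```typescript
// Hypothetical sketch of src/Configuration.ts -- names and values are
// assumptions for illustration; edit the real file to match your project.
export namespace Configuration {
    // port the backend server listens on
    export const API_PORT: number = 37001;

    // database connection constants (targets of the word replacement above)
    export const DB_NAME: string = "DB_NAME";
    export const DB_SCHEMA: string = "DB_SCHEMA";
    export const DB_ACCOUNT: string = "DB_ACCOUNT";
}
```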
This template project has its directories categorized as below.
As you can see, all of the TypeScript source files are placed in the src directory. When you build the TypeScript source files, the compiled files are placed into the bin directory, following the tsconfig.json configuration. When you build the client SDK library or the ORM models for private npm module publishing instead, their compiled files are placed into the packages directory.
If you want to customize the configuration of the Github Action, or of the debugging that can be started by pressing the F5 key, edit the .github/workflows/build.yml or .vscode/launch.json file. Their default configurations run the 3.3. Test Automation Program without any special argument.
When you're planning to deploy the backend server, read the INFRASTRUCTURE.md file and follow its steps. While following them, you may run some executable programs from the src/executable directory.
- .github/workflows/build.yml: Configuration file of the Github Action
- .vscode/launch.json: Configuration for debugging
- packages/: Packages to publish as private npm modules
- packages/api/: Client SDK library for the client developers
- packages/models/: ORM library for the DB developers
- src/: TypeScript Source directory
- src/api/: Client SDK that would be published as @ORGANIZATION/PROJECT-api
- src/api/functional/: API functions generated by nestia
- src/api/structures/: DTO structures (a DTO sketch follows this list)
- src/controllers/: Controller classes of the Main Program
- src/executable/: Executable programs
  - the backend server itself
  - the update program for the user
  - the updator program on the server side
- src/models/: ORM Models
- src/providers/: Conversion between ORM Models and DTO Structures
- src/test/: Test Automation Program
- INFRASTRUCTURE.md: How to deploy the backend server on the cloud like AWS
- package.json: NPM configuration
- tsconfig.json: TypeScript configuration for the Main Program
- tsconfig.api.json: TypeScript configuration for the client SDK
- tsconfig.models.json: TypeScript configuration for the ORM Models package
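To give a flavor of the DTO structures kept in src/api/structures, here is a hypothetical interface; the IBbsSection name matches the SDK example later in this document, but the exact fields and file path are assumptions.

```typescript
// Hypothetical DTO placed in src/api/structures/bbs/systematic/IBbsSection.ts
// DTOs are plain interfaces shared by the Main Program and the client SDK.
export interface IBbsSection {
    /** Primary key */
    id: string;

    /** Identifier code used as a path parameter in the API */
    code: string;

    /** Section type, e.g. "qna" */
    type: string;

    /** Human-readable section name */
    name: string;

    /** Creation time (ISO 8601 string) */
    created_at: string;
}
```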
This backend server has been implemented in TypeScript and runs on NodeJS. Therefore, to mount this backend server on your local machine, you have to install NodeJS.
Also, as you can see from the package.json file, this project requires private npm modules of the @ORGANIZATION scope, provided through Github. Therefore, to develop this backend server, you have to configure the .npmrc file. Open the link below and complete the configuration.
This backend server has adopted PostgreSQL as its principal DB.
Therefore, to mount this backend server on your local machine, you have to install PostgreSQL v14. Also, you have to install StackBuilder and PostGIS at the same time. Click the links below and install PostgreSQL, StackBuilder and PostGIS.
- https://www.enterprisedb.com/downloads/postgres-postgresql-downloads
- https://postgis.net/workshops/postgis-intro/installation.html
When the installation has finished, you'd better add the bin directory of PostgreSQL to the PATH environment variable. If your operating system is Windows, the path may be C:\Program Files\PostgreSQL\14\bin. If you're using macOS, it would be /Applications/Postgres.app/Contents/MacOS/bin.
After the PATH environment variable configuration, connect to the PostgreSQL terminal and create the DB_NAME database and the DB_SCHEMA schema. Also, create two accounts, DB_ACCOUNT_w and DB_ACCOUNT_r, and grant them writable and readonly privileges respectively.
You can also replace the SQL scripts below by running npm run schema <account> <password>. When running the npm run schema command, replace the <account> and <password> placeholders with the root account of the local PostgreSQL server and its password.
-- CREATE USER
CREATE USER "DB_ACCOUNT_w" WITH ENCRYPTED PASSWORD 'your_password';
GRANT "DB_ACCOUNT_w" TO postgres;
-- CREATE DB & SCHEMA
CREATE DATABASE "DB_NAME" OWNER "DB_ACCOUNT_w";
\connect "DB_NAME";
CREATE SCHEMA "DB_SCHEMA" AUTHORIZATION "DB_ACCOUNT_w";
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA "DB_SCHEMA" TO "DB_ACCOUNT_w";
-- READABLE ACCOUNT
CREATE USER "DB_ACCOUNT_r" WITH ENCRYPTED PASSWORD 'your_password';
GRANT CONNECT ON DATABASE "DB_NAME" TO "DB_ACCOUNT_r";
GRANT USAGE ON SCHEMA "DB_SCHEMA" TO "DB_ACCOUNT_r";
GRANT SELECT ON ALL TABLES IN SCHEMA "DB_SCHEMA" TO "DB_ACCOUNT_r";
From now on, you can start the backend server development right away. Just download this project through the git clone command and install the dependencies with the npm install command. After those preparations, you can start developing by typing the npm run dev command.
# CLONE REPOSITORY
git clone ${REPOSITORY}
cd backend
# INSTALL DEPENDENCIES
npm install
# START DEVELOPMENT (tsc --watch)
npm run dev
When those installations have all been completed, you can mount the basic data or start the local backend server by typing the commands below.
First, npm run setup is a command that seeds the initial data, which means the minimum data required for running the local backend server. Therefore, to mount the backend server, you have to run the npm run setup command at least once.
Second, npm run test is a command that runs the test automation program. The test automation program not only seeds the initial data, but also generates sample data during the testing. Also, note that whenever you run the npm run test command, the local DB is reset. Therefore, consider it carefully whenever calling the npm run test command.
# Seed initial data
# minimum data required to run the local backend server
npm run setup
# Run test automation program
# seed not only initial data, but also sample data
# it resets the local DB
npm run test
# Start the local backend server
npm run start local
# Stop the local backend server
npm run stop
If you want to add a new feature or update an existing one at the API level, you should write the code into the matched API controller, which is stored in the src/controllers directory as part of the Main Program.
However, @samchon does not recommend writing code into the Main Program first, without any preparation. Instead, @samchon recommends declaring the definitions first and implementing the Main Program later.
Therefore, if you want to add a new feature at the API level, define the matched data entity in the src/models and src/api/structures directories. After the data entity definition, declare the function header in the matched API controller class in src/controllers. Note that it's only a declaration, the header only, without implementing the function body (see the sketch after the step list below).
After those declarations, build the client SDK through the npm run build:api command and implement the Test Automation Program using the SDK with use case scenarios. Development of the Main Program should start only after those preparations are all ready. Of course, the Main Program can be verified with the pre-developed Test Automation Program at any time.
- Declare data entity
- Declare API function header
- Build the client SDK
- Implement the Test Automation Program
- Develop the Main Program
- Validate the Main Program through the Test Automation Program
- Deploy to the Dev and Real servers.
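As a hedged illustration of steps 1 and 2, here is what a header-only controller declaration might look like before any implementation exists; the route, controller name and import path are assumptions.

```typescript
import { Controller } from "@nestjs/common";
import { TypedBody, TypedRoute } from "@nestia/core";

// hypothetical DTO import; the real structure belongs in src/api/structures
import { IBbsQuestionArticle } from "../api/structures/bbs/articles/IBbsQuestionArticle";

@Controller("bbs/customers/articles/qna")
export class BbsQnaArticlesController {
    /**
     * Header-only declaration: the signature alone is enough for the
     * npm run build:api SDK generation, while the body stays unimplemented
     * until the Test Automation Program is ready.
     */
    @TypedRoute.Post()
    public async store(
        @TypedBody() input: IBbsQuestionArticle.IStore,
    ): Promise<IBbsQuestionArticle> {
        input; // referenced only to avoid an unused-parameter warning
        throw new Error("Not implemented yet.");
    }
}
```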
@ORGANIZATION/PROJECT provides an SDK (Software Development Kit) for convenience.
For the client developers who connect to this backend server, @ORGANIZATION/PROJECT provides not API documents like Swagger, but an API interaction library, a typical SDK (Software Development Kit), for convenience.
With the SDK, client developers never need to re-define duplicated API interfaces. They just utilize the interfaces and asynchronous functions defined in the SDK. It is much more convenient than any other REST API solution.
To build the SDK, just type the npm run build:api command. The SDK is generated by nestia automatically, by analyzing the source code of the controller classes at the compilation level. After building the SDK, you can publish it through the npm run package:api command.
# BUILD SDK AND PUBLISH IT
npm run build:api
npm run package:api
# BUILDING SWAGGER IS ALSO POSSIBLE,
# BUT NOT RECOMMENDED
npm run build:swagger
When the SDK has been published, client programmers can interact with this backend server very easily. Just let them install the SDK and call the SDK functions with the await keyword like below.
import api from "@samchon/bbs-api";
import { IBbsCitizen } from "@samchon/bbs-api/lib/structures/bbs/actors/IBbsCitizen";
import { IBbsCustomer } from "@samchon/bbs-api/lib/structures/bbs/actors/IBbsCustomer";
import { IBbsQuestionArticle } from "@samchon/bbs-api/lib/structures/bbs/articles/IBbsQuestionArticle";
import { IBbsSection } from "@samchon/bbs-api/lib/structures/bbs/systematic/IBbsSection";
async function main(): Promise<void>
{
//----
// PREPARATIONS
//----
// CONNECTION INFO
const connection: api.IConnection = {
host: "http://127.0.0.1:37001",
password: {
key: "pJXhbHlYfzkC1CBK8R67faaBgJWB9Myu",
iv: "IXJBt4MflFxvxKkn"
}
};
// ISSUE A CUSTOMER ACCOUNT
const customer: IBbsCustomer = await api.functional.bbs.customers.authenticate.issue
(
connection,
{
href: window.location.href,
referrer: window.document.referrer
}
);
// ACTIVATE THE CUSTOMER
customer.citizen = await api.functional.bbs.customers.authenticate.activate
(
connection,
{
name: "Jeongho Nam",
mobile: "821036270016"
}
);
//----
// WRITE A QUESTION ARTICLE
//----
// FIND TARGET SECTION
const sectionList: IBbsSection[] = await api.functional.bbs.customers.systematic.sections.index
(
connection
);
const section: IBbsSection = sectionList.find(section => section.type === "qna")!;
// PREPARE INPUT DATA
const input: IBbsQuestionArticle.IStore = {
title: "Some Question Title",
body: "Some Question Body Content...",
files: []
};
// DO WRITE
const question: IBbsQuestionArticle = await api.functional.bbs.customers.articles.qna.store
(
connection,
section.code,
input
);
console.log(question);
}
TDD (Test Driven Development)
After the definitions and the client SDK generation, you have to design the use-case scenarios and implement a Test Automation Program that represents those scenarios and guarantees the Main Program.
To add a new test function to the Test Automation Program, create a new TS file under the src/test/features directory following the categories below, and implement the test scenario as a function with a representative name and the export keyword (a hedged sketch follows the category list below). The existing files in the src/test/features directory would be good samples for you, so I will not describe how to make a test function in detail.
- src/test/features/api
  - About the client SDK that would be provided to the frontend developers
  - Validate the matched APIs implemented in the Main Program
  - Use all of the API functions, through lots of scenarios
  - Most of the test functions belong to this category
- src/test/features/models
  - About the ORM Model classes
  - Validate tables, methods, and even materialized views, through lots of scenarios
- src/test/features/external
  - Open virtual external systems
  - Validate interactions with this backend server
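For example, a test function under src/test/features/api might look like the hedged sketch below; the file name, import paths and API function follow the SDK example above, but are assumptions rather than actual files of this template.

```typescript
// hypothetical file: src/test/features/api/test_api_bbs_section_index.ts
import api from "../../../api";
import { IBbsSection } from "../../../api/structures/bbs/systematic/IBbsSection";

/**
 * Exported with a representative name; the Test Automation Program
 * discovers every exported function in src/test/features and runs it
 * with the prepared connection info.
 */
export async function test_api_bbs_section_index(
    connection: api.IConnection,
): Promise<void> {
    // call an SDK function generated by nestia
    const sections: IBbsSection[] =
        await api.functional.bbs.customers.systematic.sections.index(connection);

    // validate the result with a plain assertion
    if (sections.length === 0)
        throw new Error("No BBS section has been seeded.");
}
```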
Anyway, keep in mind that the Test Automation Program resets the DB schema whenever it runs. Therefore, you have to be careful if important data has been stored in the local (or dev) DB server. To avoid resetting the DB, configure the skipReset option like below.
Also, the Test Automation Program runs all of the test functions placed in the src/test/features directory. However, such full testing may consume too much time. Therefore, if you want to reduce the testing time by running only some test functions, use the include option like below.
- Supported options:
  - mode: mode of the target server (local, dev, real)
  - include: test only the functions whose names contain the given keyword
  - exclude: exclude the functions whose names contain the given keyword
  - skipReset: do not reset the DB
  - count: repeating count of the test automation program
# test in the dev server
npm run test -- --mode=dev
# test without db reset
npm run test -- --skipReset
# test only the functions whose names contain the "something" keyword
# do not reset db
npm run test -- --include=something --skipReset
After the definitions, the client SDK building and the Test Automation Program are all prepared, you can finally develop the Main Program. Also, when you complete the Main Program implementation, it would be better to validate it through the pre-built SDK and Test Automation Program.
However, do not make the mistake of writing all the source code in the controller classes. The API controllers must play only an intermediation role. The main source code should be written separately, following the directory categorization. For example, source code about DB I/O should be written in the src/providers directory.
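Here is a minimal sketch of that separation, with a hypothetical provider handling the DB I/O and the controller only mediating; every name, path and field below is an assumption for illustration.

```typescript
import { Controller } from "@nestjs/common";
import { TypedRoute } from "@nestia/core";

// hypothetical imports: an ORM model and a DTO defined elsewhere in the project
import { BbsSection } from "../models/BbsSection";
import { IBbsSection } from "../api/structures/bbs/systematic/IBbsSection";

// src/providers/BbsSectionProvider.ts (hypothetical)
// The provider owns the DB I/O and the ORM-to-DTO conversion.
export namespace BbsSectionProvider {
    export async function index(): Promise<IBbsSection[]> {
        const models: BbsSection[] = await BbsSection.find();
        return models.map(json);
    }

    export function json(model: BbsSection): IBbsSection {
        return {
            id: model.id,
            code: model.code,
            type: model.type,
            name: model.name,
        };
    }
}

// src/controllers/BbsSectionsController.ts (hypothetical)
// The controller only intermediates between the HTTP route and the provider.
@Controller("bbs/customers/systematic/sections")
export class BbsSectionsController {
    @TypedRoute.Get()
    public index(): Promise<IBbsSection[]> {
        return BbsSectionProvider.index();
    }
}
```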
If you've committed a new version and pushed it to the repository, you can update the backend server without any disruption. With the npm run update command, you can let the backend server perform the non-disruptive update processes below.
- Pull the new commit
- Build the new source code
- Restart the backend server without disruption
To accomplish the non-disruptive update system, the server instance must run the updator program before mounting the backend server program. If the target backend system is composed of multiple server instances behind an ELB (Elastic Load Balancer) and the target instance is not the master instance, you should run the npm run start:updator:slave command.
Otherwise, use the npm run start:updator:master command.
#----
# RUN UPDATOR PROGRAM
#----
# THE INSTANCE IS MASTER
npm run start:updator:master
# THE INSTANCE IS SLAVE
npm run start:updator:slave
#----
# MOUNT THE BACKEND SERVER UP
#----
npm run start real
Sometimes, you may want to mount the backend server on your local system, not for running the Test Automation Program, but for another reason like front-end application development.
In that case, you can mount the local backend server and keep it open until the npm run stop command is executed, by typing the npm run start local command.
npm run start local
npm run stop
Also, if someone else has committed a new version to the master branch, you can update your local backend server without disruption. That means the non-disruptive update system even works in the local environment. To activate it, run the updator program before mounting the backend server on your local machine.
# START THE LOCAL BACKEND SERVER WITH UPDATOR PROGRAM
npm run start updator:master
npm run start local
# UPDATE THE LOCAL SERVER WITHOUT DISRUPTION
npm run update local
Updating the dev server is very easy. Just commit new code to the dev branch and type the npm run update dev command on your local machine. With that command, the dev server replaces its code with the latest and the non-disruptive update system is executed.
npm run update dev
Also, the dev server is designed to test and validate newly committed source code before releasing it to the real server. Therefore, even the dev server may need to reset its DB, like the local server which often runs the Test Automation Program.
# MOVE TO THE PROJECT DIRECTORY
ssh ${dev_address}
cd ${project_directory}
# DO RESET
npm run reset:dev
# REFERENCE - COMMAND SET COMPOSING THE RESET:DEV
git pull
npm install
npm run build
npx pm2 delete all
npm run test -- --mode=dev
npm run start:updator:master
npm run start dev
Updating the real server is just as easy. Just commit new code to the real branch and type the npm run update real command on your local machine. With that command, the real server replaces its code with the latest and the non-disruptive update system is executed.
npm run update real
The list of run commands defined in the package.json is as below:
- build: Compile the source code
- build:api: Build the client SDK library for the client developers
- build:models: Build the ORM library
- dev: Incremental compilation using the --watch option
- reset:dev: Restart the dev backend server with DB reset
- revert: Revert the backend server to a previous commit
  - npm run revert local e245tjfg345tq453tae
  - npm run revert dev e245tjfg345tq453tae
  - npm run revert real e245tjfg345tq453tae
- schema: Create the DB, users and schemas on the local machine
- start: Start the backend server
  - npm run start local
  - npm run start dev
  - npm run start real
- package:api: Deploy the client SDK library
- package:models: Deploy the ORM library
- start:updator:master: Start the non-disruptive update system (master)
- start:updator:slave: Start the non-disruptive update system (slave)
- start:reload: Restart the backend server
- stop: Stop the backend server
- stop:updator:master: Stop the non-disruptive update system (master)
- stop:updator:slave: Stop the non-disruptive update system (slave)
- test: Start the Test Automation Program
- test:update: Test the non-disruptive update system
- update: Start the non-disruptive update
  - npm run update dev
  - npm run update real
This backend project utilizes Github Actions to run a cloud CI (Continuous Integration) test whenever a commit or PR event occurs. The CI test starts by installing the backend server program on a clean Ubuntu system on which nothing has been installed yet.
All of the CI processes, like installing the required programs on the clean Ubuntu system, compiling, and running the Test Automation Program, are defined in the .github/workflows/build.yml script file. The pass or failure status shown by the badge above represents the outcome of the CI test.
Write the related repositories down.