diff --git a/README.md b/README.md index 049aa75..017969d 100644 --- a/README.md +++ b/README.md @@ -8,8 +8,8 @@ articles to [Medium](https://medium.com/@spencerlepine) and [Dev.to](https://dev ## 💻 Local Development ```sh -$ npm install -$ npm start +$ yarn install +$ yarn start ``` This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server. @@ -17,7 +17,7 @@ This command starts a local development server and opens up a browser window. Mo ## 🏗️ Production Build ```sh -$ npm run build +$ yarn run build ``` This command generates static content into the `build` directory and can be served using any static contents hosting service. @@ -43,7 +43,8 @@ This command generates static content into the `build` directory and can be serv ## ⏳ Development -Clean up files, remove doc folder, renamne blog to content?? Move architecture picture Fix twitter preview images +- [feature] Fix twitter preview images +- [article] Add images to #100daysofCode article from twitter ### Dev.to Articles diff --git a/architecture.png b/architecture.png index 361d6ff..f0523e1 100644 Binary files a/architecture.png and b/architecture.png differ diff --git a/blog/authors.yml b/content/authors.yml similarity index 100% rename from blog/authors.yml rename to content/authors.yml diff --git a/content/building-a-readme-crawler-with-node-js/index.devto b/content/building-a-readme-crawler-with-node-js/index.devto new file mode 100644 index 0000000..2808f2c --- /dev/null +++ b/content/building-a-readme-crawler-with-node-js/index.devto @@ -0,0 +1,172 @@ +--- +title: Building a README Crawler With Node.js +description: An Overview of the Node.js README Web Crawler project and how I created it. # Dev.to +tags: 'crawler, nodejs, repositories' # Dev.to (Max 3) +canonical_url: null # Dev.to +--- + +![Blog Post Thumbnail](./thumbnail.jpg) + +Building a README Crawler with Node.js + +An Overview of the Node.js README Web Crawler project and how I created it. + +A recent project of mine was this Node.js [web crawler](https://github.com/spencerlepine/readme-crawler). Working on that led to this idea for another crawler. I wanted a way to navigate through GitHub and search for obvious typos. I had this idea after stumbling across silly typos on numerous portfolio pages. Perhaps I could help fix these errors and ensure these portfolio/sites are more presentable. + +If you’re interest in the final product, you can find the package [here](https://www.npmjs.com/package/readme-crawler), as well as the [GitHub Repo](https://github.com/spencerlepine/readme-crawler). + +The goal was to create a process for automatically navigating through repositories and creating a method for spell-checking what is found. Doing that manually is time consuming, so the computer will execute that process for us. A [web crawler](https://www.cloudflare.com/learning/bots/what-is-a-web-crawler/) will systematically browse different pages across the internet. The purpose of this specific crawler would be scraping GitHub only. + +Here's the breakdown of this problem: + - Fetch the HTML data from a URL + - Look at the README file displayed in that DOM + - Export README and details README for later use + - Generate a list of repository links in displayed in README + +To execute the algorithm, we will use [Node.js](https://nodejs.org/) (for the JavaScript runtime) and [node-fetch](https://www.npmjs.com/package/node-fetch) (for network requests). 
This means we will run the code locally from the command line. +For this project, we will have an output folder to store all the README data, as well as a list (queue) of repository URLs to visit. +Before diving into the code, it is important to plan the input and output of the algorithm. For this web crawler, we will start at a valid GitHub repository page, which would be one URL string. After visiting each page with a README, we will export the data into a new file. +Now lets cover the process of requesting a repository page from a URL. For this, we only care about saving the README file that is displayed, and we will ignore any other links that GitHub displays (such as the navbar). We will send a URL request with node-fetch, and retrieve the result of a HTML string. If we convert the HTML string to a DOM Tree, we can search for a specific element. GitHub stores the README file under a div with the class "markdown-body". We can use a library called 'jsdom' to use Browser API methods, and return a specific node. + +```js +const fetchReadMeDOMFromRepoURL = async function (url) { + let destinationDOM; + + await fetch(url) + .then((response) => { + return response.text(); + }) + .then((data) => { + const { document } = new JSDOM(data).window; + const readmeNode = document.querySelector(".markdown-body"); + destinationDOM = readmeNode; + }) + .catch((err) => { + console.warn("Could not fetch url: " + url); + }); + + return destinationDOM; +}; +``` + +Now that we have access to a local node displaying the README content, we can export that data. +Instead of using a spell checker at this stage, I decided to handle that separately and resort to a command-line alternative. This meant the README file could be exported to file locally, and then we process spell checking files after the crawler stops running. +With a README DOM node we will convert it from HTML to a markdown file. Websites give the browser HTML code to display. + +```js +const exportRepoDOM = async function (readmeNode, repoURL, outputFolderPath) { + if (readmeNode) { + const { name, owner, href } = parseGitHubUrl(repoURL); + const folderDestination = `${outputFolderPath}${owner}-${name}/`; + + if (!fs.existsSync(folderDestination)) { + fs.mkdirSync(folderDestination); + } + + const markdown = turndownService.turndown(readmeNode.outerHTML); + createMarkdownFile(markdown, folderDestination); + + const info = `URL=${href}\nGIT_URL=${href}.git\nREPO_NAME=${name}\nOWNER=${owner}\n`; + createInfoFile(info, folderDestination); + } +}; +``` +Before we are finished with this README, we have to traverse the DOM node to find any other GitHub repository links. This will allow us to follow pages and crawl GitHub. Keep in mind though, there could be a README without any links, and we might want to navigate through GitHub some other way. +Since we are using jsdom, we can use the built-in getElementsByTagName method. We only want to save valid GitHub links, so we can use a helper function to test with regex. With that, we can create a list of links found. + +```js +const getLinksFromDOM = function (jsDOM, isUsableLink) { + const validLinks = []; + + const linkElements = (jsDOM && jsDOM.getElementsByTagName("a")) || []; + for (let i = 0; i < linkElements.length; i++) { + const thisElement = linkElements[i]; + const thisUrl = thisElement.href; + + if (isUsableLink(thisUrl)) { + validLinks.push(thisUrl); + } + } + return validLinks; +}; +``` + +Great, so far we retrieve README content from a URL and use that to generate a list of new URLs. 
If we want to continue finding README files, we can follow the links and repeat the entire process. This will recursively crawl the GitHub repositories, allowing us to gather data.
+ +--- + +View the source code on [GitHub](https://github.com/spencerlepine/readme-crawler). + +Follow my journey or connect with me here: +- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/) +- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com) +- Portfolio: [spencerlepine.com](https://spencerlepine.com) +- https://github.com/spencerlepine +- Twitter: [@spencerlepine](https://twitter.com/spencerlepine) diff --git a/content/building-a-readme-crawler-with-node-js/index.md b/content/building-a-readme-crawler-with-node-js/index.md new file mode 100644 index 0000000..1fe923f --- /dev/null +++ b/content/building-a-readme-crawler-with-node-js/index.md @@ -0,0 +1,189 @@ +--- +title: Building a README Crawler With Node.js +slug: building-a-web-crawler-with-node-js +tags: [GitHub, Repositories, WebCrawler, Node.js] +authors: [spencerlepine] +date: 2021-08-14T12:00 +--- + +![Blog Post Thumbnail](./thumbnail.jpg) + +Building a README Crawler with Node.js + +An Overview of the Node.js README Web Crawler project and how I created it. + +A recent project of mine was this Node.js [web crawler](https://github.com/spencerlepine/readme-crawler). Working on that led to this idea for another crawler. I wanted a way to +navigate through GitHub and search for obvious typos. I had this idea after stumbling across silly typos on numerous portfolio pages. Perhaps I could help fix these errors and +ensure these portfolio/sites are more presentable. + +If you’re interest in the final product, you can find the package [here](https://www.npmjs.com/package/readme-crawler), as well as the +[GitHub Repo](https://github.com/spencerlepine/readme-crawler). + +The goal was to create a process for automatically navigating through repositories and creating a method for spell-checking what is found. Doing that manually is time consuming, so +the computer will execute that process for us. A [web crawler](https://www.cloudflare.com/learning/bots/what-is-a-web-crawler/) will systematically browse different pages across +the internet. The purpose of this specific crawler would be scraping GitHub only. + +Here's the breakdown of this problem: + +- Fetch the HTML data from a URL +- Look at the README file displayed in that DOM +- Export README and details README for later use +- Generate a list of repository links in displayed in README + +To execute the algorithm, we will use [Node.js](https://nodejs.org/) (for the JavaScript runtime) and [node-fetch](https://www.npmjs.com/package/node-fetch) (for network requests). +This means we will run the code locally from the command line. For this project, we will have an output folder to store all the README data, as well as a list (queue) of repository +URLs to visit. Before diving into the code, it is important to plan the input and output of the algorithm. For this web crawler, we will start at a valid GitHub repository page, +which would be one URL string. After visiting each page with a README, we will export the data into a new file. Now lets cover the process of requesting a repository page from a +URL. For this, we only care about saving the README file that is displayed, and we will ignore any other links that GitHub displays (such as the navbar). We will send a URL request +with node-fetch, and retrieve the result of a HTML string. If we convert the HTML string to a DOM Tree, we can search for a specific element. GitHub stores the README file under a +div with the class "markdown-body". 
We can use a library called 'jsdom' to access Browser API methods and return a specific node.
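+
+The snippets below lean on a few packages without showing the setup. Here is a minimal sketch of the assumed imports — `node-fetch` and `jsdom` are named in the article, while
+`turndown` and the `fs` wiring are my assumptions based on how `turndownService` and the file helpers are used:
+
+```js
+// Assumed setup for the snippets below (a sketch, not the article's actual file)
+const fetch = require('node-fetch'); // network requests
+const { JSDOM } = require('jsdom'); // DOM parsing in Node.js
+const fs = require('fs'); // writing output files
+const TurndownService = require('turndown'); // HTML -> Markdown conversion
+
+const turndownService = new TurndownService();
+```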
+ +```js +const saveLinkToQueue = function (linkStr, outputFolderPath, outputFileName) { + const filePath = outputFolderPath + outputFileName; + if (fs.existsSync(filePath)) { + fs.appendFile(filePath, linkStr + '\r\n', function (err) { + if (err) throw err; + }); + } else { + fs.writeFile(filePath, linkStr + '\r\n', function (err) { + if (err) throw err; + }); + } +}; +``` + +All of these modules together will download README files in an organized output folder. After running the web crawler, we can run a spell checker on the exported data. To do this, +I used a package called ‘[yaspeller](https://www.npmjs.com/package/yaspeller)’. Each time I want to correct a README, I can generate a file with all common typos and errors. + +```sh +yaspeller -e ".md" ./ --find-repeat-words --ignore-digits --ignore-urls --only-errors &> "spellcheck.txt" +``` + +And that's it! I now have a way to look through hundreds of repositories. I can spell check tons of README files and help people remove possibly embarrassing typos. There are many +ways to expand on this. You could analyze the languages used, the diction, create a Graph, or even run the data through machine learning algorithms. + +```js +run() { + const repositoriesFolder = this.outputFolderPath + "repositories/"; + const linkQueueFile = "linkQueue.txt"; + + const linkListCallback = async function (linkList) { + const outputFolderPath = this.outputFolderPath; + if (this.followReadMeLinks) { + await linkList.forEach(async (link) => { + await saveLinkToQueue(link, outputFolderPath, linkQueueFile); + }); + } + }.bind(this); + + let nextURL =await getNextLinkFromQueue(this.outputFolderPath, linkQueueFile + + while (nextURL) { + fetchAndProcessRepo(nextURL, repositoriesFolder, linkListCallback); + nextURL = await getNextLinkFromQueue(this.outputFolderPath, linkQueueFile); + } + } +``` + +This project is also available on [npm](https://www.npmjs.com/package/readme-crawler). Install the package and try it yourself! + +```js +import ReadMeCrawler from 'readme-crawler'; + +var crawler = new ReadMeCrawler({ + startUrl: 'https://github.com/jnv/lists', + followReadMeLinks: true, + outputFolderPath: './output/', +}); + +crawler.run(); +``` + +Not only was this project fun, but I was able to learn about using Node.js and other npm packages. I spent many hours reading about the 'fs' module, and thinking of different ways +to process/store the data. I also tried working with executing commands with Node.js, to run the spell checker synchronously on each result. However, It was difficult to pipe the +stdout correctly, and I realized that spell checking was a separate concern apart from web crawling anyways. Overall, this was a great learning experience and problem solving +exercise. + +--- + +View the source code on [GitHub](https://github.com/spencerlepine/readme-crawler). 
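+The enqueue helper from the article is shown below. Its dequeue counterpart never appears, so first, here is a rough sketch of what `getNextLinkFromQueue` (referenced later in
+`run()`) might look like, assuming the queue file stores one URL per line:
+
+```js
+// Hypothetical sketch: pop the first URL off the queue file (FIFO)
+const getNextLinkFromQueue = async function (outputFolderPath, outputFileName) {
+  const filePath = outputFolderPath + outputFileName;
+  if (!fs.existsSync(filePath)) return null;
+
+  const lines = fs.readFileSync(filePath, 'utf8').split(/\r?\n/).filter(Boolean);
+  if (lines.length === 0) return null;
+
+  const nextURL = lines.shift(); // take the oldest link first
+  fs.writeFileSync(filePath, lines.length ? lines.join('\r\n') + '\r\n' : '');
+  return nextURL;
+};
+```
+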
+ +Follow my journey or connect with me here: + +- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/) +- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com) +- Portfolio: [spencerlepine.com](https://spencerlepine.com) +- https://github.com/spencerlepine +- Twitter: [@spencerlepine](https://twitter.com/spencerlepine) diff --git a/blog/building-a-readme-crawler-with-node-js/building-a-readme-crawler-with-node-js.md b/content/building-a-readme-crawler-with-node-js/index.medium similarity index 95% rename from blog/building-a-readme-crawler-with-node-js/building-a-readme-crawler-with-node-js.md rename to content/building-a-readme-crawler-with-node-js/index.medium index 8874ed5..4144a67 100644 --- a/blog/building-a-readme-crawler-with-node-js/building-a-readme-crawler-with-node-js.md +++ b/content/building-a-readme-crawler-with-node-js/index.medium @@ -1,9 +1,7 @@ --- title: Building a README Crawler With Node.js -slug: building-a-web-crawler-with-node-js -tags: [GitHub, Repositories, WebCrawler, Node.js] -authors: [spencerlepine] -date: 2021-08-14T12:00 +description: An Overview of the Node.js README Web Crawler project and how I created it. +publish_status: "draft" --- ![Blog Post Thumbnail](./thumbnail.jpg) @@ -165,8 +163,9 @@ Overall, this was a great learning experience and problem solving exercise. View the source code on [GitHub](https://github.com/spencerlepine/readme-crawler). -Also find me here: -* [Twitter (@spencerlepine)](https://twitter.com/SpencerLepine) -* [GitHub (@spencerlepine)](https://github.com/spencerlepine) -* [LinkedIn](https://www.linkedin.com/in/spencer-lepine/) -* [YouTube (Spencer Lepine)](https://www.youtube.com/channel/UCBL6vAHJZqUlyJp-rcFU55Q) +Follow my journey or connect with me here: +- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/) +- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com) +- Portfolio: [spencerlepine.com](https://spencerlepine.com) +- https://github.com/spencerlepine +- Twitter: [@spencerlepine](https://twitter.com/spencerlepine) diff --git a/blog/building-a-readme-crawler-with-node-js/thumbnail.jpg b/content/building-a-readme-crawler-with-node-js/thumbnail.jpg similarity index 100% rename from blog/building-a-readme-crawler-with-node-js/thumbnail.jpg rename to content/building-a-readme-crawler-with-node-js/thumbnail.jpg diff --git a/content/building-llama-as-a-service/index.devto b/content/building-llama-as-a-service/index.devto new file mode 100644 index 0000000..a1a6884 --- /dev/null +++ b/content/building-llama-as-a-service/index.devto @@ -0,0 +1,192 @@ +--- +title: Building Llama as a Service (LaaS) +description: Creating the world’s first Llama as a Service, the random image API you didn't realize you needed. # Dev.to +tags: 'nodejs, mongodb, heroku' # Dev.to (Max 3) +canonical_url: null # Dev.to +--- + +![Blog Post Thumbnail](./thumbnail.jpg) + +This is a walkthrough of the development process and system design engineering for the Llama as a Service. LaaS is a website and public API that can serve random Llama images. It will respond with a single image URL, or even a list. 
+ +- Visit the [LaaS website](https://llama-as-a-service.netlify.app/) for a demo +- View the source code on [GitHub](https://github.com/orgs/llama-as-a-service/repositories) +- View the walkthrough [YouTube video](https://www.youtube.com/watch?v=uDQUA_JTMJk) + +### What I Learned + +For this project, there is a frontend built with [React](https://reactjs.org/) hosted on [Netlify](https://www.netlify.com/), connected to the backend. + +I built each API with Node.js, Express, and Docker. Services connected to a NoSQL [MongoDB](https://www.mongodb.com/) database. + +Each service is in an independent repository to maintain separation of concerns. It would have been possible to build this in a monorepo, but it was good practice. + +Each repository uses [GitHub Actions](https://docs.github.com/en/actions) to build, and test the code on every push. Express API was deployed to [Heroku](https://www.heroku.com/) when the main branch was pushed. + +With each app containerized with [Docker](https://www.docker.com/), this allows it to be run on any other developer's machine also running Docker. Although I had automated deployments to Heroku without this, I decided to upload each service to a container registry. + +Each repository also used a GitHub Actions workflow to automatically tag and version updates and releases. It would then build and publish the most up to date Docker image, and release it to the [GitHub Container Registry](https://ghcr.io/). + +For future use, this makes it crazy easy to deploy a Kubernetes cluster to the cloud, with a simple `docker pull ghcr.io/OWNER/IMAGE_NAME` command. However, that was beyond the scope of this project because of zero budget. + +To manage the environment variables, I was able to share Secrets to the GitHub Action workflows, which are encrypted, and can be shared across an entire organization (meaning multiple repos could access the variables). This allowed me to deploy my code securely to Heroku, without ever hard-coding the API keys. + +Another tool I used was [Artillery](https://www.artillery.io/) for load testing on my local machine. + +Instead of npm, I tried using `yarn` for the package manager, and it was WAY faster in GitHub Actions even without caching enabled. + +Although they did not make it into production, I experimented with the [RabbitMQ](https://www.rabbitmq.com/) message broker, Python (Django, Flask), Kubernetes + minikube, [JWT](https://jwt.io/), and [NGINX](https://www.nginx.com/). This was a hobby project, but I intended to learn about microservices along the way. + +### Demonstration +Here is a screen shot of the [LaaS](https://llama-as-a-service.netlify.app/) website. + +![Frontend Screenshot](https://user-images.githubusercontent.com/60903378/179845930-8bd90991-42da-400b-8983-543030be0502.png) + +If you would like to try out the API, simply made a GET request to the following endpoint: + +`https://llama-as-a-service-images.herokuapp.com/random` +### Creating an API +First, I started with a simple API built with Node.js and Express, containerized with Docker. I set up GitHub Actions for CI to build and test it on each push. This will later connect to the database, to respond with image URLs. + +![Images API](https://user-images.githubusercontent.com/60903378/179846653-c04f2636-75a0-47a0-9789-d58a0cd3928e.png) + +Although this API doesn’t NEED to scale to millions of users, it was a valuable exercise for building and scaling a system. I aimed for a minimum latency of 300ms with 200 RPS. 
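+If you just want to see it respond, a one-liner against the public endpoint (shown again in the Demonstration section below) does the trick — the response shape here is my assumption:
+
+```js
+// Quick check: fetch a random llama image URL
+fetch('https://llama-as-a-service-images.herokuapp.com/random')
+  .then((res) => res.json())
+  .then((data) => console.log(data)) // one image URL, or a list via ?count=
+  .catch(console.error);
+```
+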
+
+### Image Database
+ +This is the only code we need for this endpoint, compared to the previous 20 lines and O(n^2) space. + +![Random Query Code](https://user-images.githubusercontent.com/60903378/179846772-fdcf54a1-34eb-42a1-a4cd-8b05ba783a7f.png) + +With the new query, the endpoint maintains 99% sub-300ms response time with a max of 440 RPS over 15 seconds. + +![Latency Improvements Chart](https://user-images.githubusercontent.com/60903378/179846777-3518451f-33f2-4638-b4d9-c614db7d52e5.png) + + +### Horizontally Scaling the API +With the containerized Node.js/Express API, I could run multiple containers, scaling to handle more traffic. Using a tool called [minikube](https://github.com/kubernetes/minikube), we can easily spin up a local [Kubernetes](https://github.com/kubernetes/kubernetes) cluster to horizontally scale Docker containers. It was possible to keep one shared instance of the database, and many APIs were routed with an internal Kubernetes load balancer. + +Horizontally scaling the API to two instances, the random endpoint maintains 99% sub-300ms response time with a max of 650 RPS over 15 seconds. Three API Instances => 99% sub-300ms response time with a max of 1000 RPS over 15 seconds. Five API Instances => 99% sub-300ms response time with a max of 1200 RPS over 15 seconds. + +In practice, five instances were the limit of scaling the API horizontally. Even with more instances, the traffic was never sub 300ms response time. Note, this is dependent on the hardware of my local machine, and not accounting for cross-network latency in the real world. + +With scaling, we can achieve higher throughput, allowing more traffic to flow, and resiliency, where a failed node can simply be replaced. + +![API Horizontal Scale](https://user-images.githubusercontent.com/60903378/179846784-c605f7e8-856a-4d8d-b1ff-e9ad16d34c33.png) + +Since the image responses are intended to be random, we cannot cache the responses. It would be possible to scale the database with a slave/master system, but without a large data set, it is not worth the time to test. The bottleneck is most likely the API and connections to the database, versus MongoDB not handling read requests. It may be possible to improve the read times with a REDIS database, using in-memory caching, but that is overkill for this project. +### Setting up Authentication +After playing around with load testing, I wanted to explore [JSON Web Tokens](https://jwt.io/) and build an API to handle authentication. + +This auth API will generate tokens, which will be sent back to the client as headers. The tokens headers are stored client-side (e.g. cookies, local storage), and sent to the backend each request. + +If we expand the backend, we could include the authentication logic in each microservice. + +![API Coupled Services](https://user-images.githubusercontent.com/60903378/179846797-c965ec7f-4076-4ed7-9b2e-4909004fec6e.png) + + +Not practical. Instead, we can decouple the logic into its own service as shown below: + +![API Auth Gateway](https://user-images.githubusercontent.com/60903378/179846807-6f650b2d-165f-4345-8f93-34633b2f3cc4.png) + +### Creating a Gateway API +Instead of exposing the users directly to each microservice, we should route ALL traffic from the clients to the Gateway API. For this, I chose the same tech stack of Node.js/Express. Using a library, I was able to set up a proxy to the other services. 
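+The real handler is pictured below; a rough mongoose equivalent of the optimized query might look like this (the model and field names are my assumptions):
+
+```js
+// Sketch: let MongoDB pick the random documents server-side
+app.get('/random', async (req, res) => {
+  const count = parseInt(req.query.count, 10) || 1;
+  const images = await Image.aggregate([{ $sample: { size: count } }]);
+  res.json(images.map((doc) => doc.url));
+});
+```
+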
In the future, this could be very useful for standardizing requests to the backend, tracking usage, forwarding data to a logging microservice, talking to a message broker, and more.
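+
+The proxy library was presumably [http-proxy-middleware](https://www.npmjs.com/package/http-proxy-middleware), which comes up again later in this post. A minimal gateway sketch with it might look like this, with the internal service URLs assumed:
+
+```js
+// Sketch: forward gateway traffic to the internal microservices
+const express = require('express');
+const { createProxyMiddleware } = require('http-proxy-middleware');
+
+const app = express();
+app.use('/images', createProxyMiddleware({ target: 'http://images-service:4000', changeOrigin: true }));
+app.use('/auth', createProxyMiddleware({ target: 'http://auth-service:5000', changeOrigin: true }));
+
+app.listen(process.env.PORT || 8080);
+```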
+ +![Application Architecture Diagram](https://user-images.githubusercontent.com/60903378/179846810-754daa79-2639-4550-a7dd-e0be88ef71bf.png) + +### View the Source Code +If you wish to view all of the source code for this project, you can look through each repository here: + +- GitHub Organization: [github.com/llama-as-a-service](https://github.com/llama-as-a-service) +- All the GHCR Packages: [github.com/orgs/llama-as-a-service/packages](https://github.com/orgs/llama-as-a-service/packages) +- Frontend - [github.com/llama-as-a-service/frontend](https://github.com/llama-as-a-service/frontend) +- Images API - [github.com/llama-as-a-service/images-service](https://github.com/llama-as-a-service/images-service) +- Authentication API - [github.com/llama-as-a-service/auth-service](https://github.com/llama-as-a-service/auth-service) +- Gateway API - [github.com/llama-as-a-service/gateway-service](https://github.com/llama-as-a-service/gateway-service) + + +If you want to have a repository with Node.js, Express, and Docker set up with GitHub Actions, check out the [boilerplate repository here](https://github.com/llama-as-a-service/express-docker-boilerplate) + +If you are interested in more projects by me, you can check out the [ManyShiba Twitter bot](https://spencerlepine.github.io/blog/manyshiba-the-worlds-greatest-twitter-bot), or more on my website. + +--- + +Follow my journey or connect with me here: +- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/) +- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com) +- Portfolio: [spencerlepine.com](https://spencerlepine.com) +- https://github.com/spencerlepine +- Twitter: [@spencerlepine](https://twitter.com/spencerlepine) diff --git a/content/building-llama-as-a-service/index.md b/content/building-llama-as-a-service/index.md new file mode 100644 index 0000000..22111eb --- /dev/null +++ b/content/building-llama-as-a-service/index.md @@ -0,0 +1,240 @@ +--- +title: Building Llama as a Service (LaaS) +slug: building-llama-as-a-service +tags: [Express, Docker, MongoDB, Node.js, Heroku, GitHub Actions] +authors: [spencerlepine] +date: 2022-07-12T12:00 +--- + +![Blog Post Thumbnail](./thumbnail.jpg) + +This is a walkthrough of the development process and system design engineering for the Llama as a Service. LaaS is a website and public API that can serve random Llama images. It +will respond with a single image URL, or even a list. + +- Visit the [LaaS website](https://llama-as-a-service.netlify.app/) for a demo +- View the source code on [GitHub](https://github.com/orgs/llama-as-a-service/repositories) +- View the walkthrough [YouTube video](https://www.youtube.com/watch?v=uDQUA_JTMJk) + +### What I Learned + +For this project, there is a frontend built with [React](https://reactjs.org/) hosted on [Netlify](https://www.netlify.com/), connected to the backend. + +I built each API with Node.js, Express, and Docker. Services connected to a NoSQL [MongoDB](https://www.mongodb.com/) database. + +Each service is in an independent repository to maintain separation of concerns. It would have been possible to build this in a monorepo, but it was good practice. + +Each repository uses [GitHub Actions](https://docs.github.com/en/actions) to build, and test the code on every push. Express API was deployed to [Heroku](https://www.heroku.com/) +when the main branch was pushed. + +With each app containerized with [Docker](https://www.docker.com/), this allows it to be run on any other developer's machine also running Docker. 
Although I had automated +deployments to Heroku without this, I decided to upload each service to a container registry. + +Each repository also used a GitHub Actions workflow to automatically tag and version updates and releases. It would then build and publish the most up to date Docker image, and +release it to the [GitHub Container Registry](https://ghcr.io/). + +For future use, this makes it crazy easy to deploy a Kubernetes cluster to the cloud, with a simple `docker pull ghcr.io/OWNER/IMAGE_NAME` command. However, that was beyond the +scope of this project because of zero budget. + +To manage the environment variables, I was able to share Secrets to the GitHub Action workflows, which are encrypted, and can be shared across an entire organization (meaning +multiple repos could access the variables). This allowed me to deploy my code securely to Heroku, without ever hard-coding the API keys. + +Another tool I used was [Artillery](https://www.artillery.io/) for load testing on my local machine. + +Instead of npm, I tried using `yarn` for the package manager, and it was WAY faster in GitHub Actions even without caching enabled. + +Although they did not make it into production, I experimented with the [RabbitMQ](https://www.rabbitmq.com/) message broker, Python (Django, Flask), Kubernetes + minikube, +[JWT](https://jwt.io/), and [NGINX](https://www.nginx.com/). This was a hobby project, but I intended to learn about microservices along the way. + +### Demonstration + +Here is a screen shot of the [LaaS](https://llama-as-a-service.netlify.app/) website. + +![Frontend Screenshot](https://user-images.githubusercontent.com/60903378/179845930-8bd90991-42da-400b-8983-543030be0502.png) + +If you would like to try out the API, simply made a GET request to the following endpoint: + +`https://llama-as-a-service-images.herokuapp.com/random` + +### Creating an API + +First, I started with a simple API built with Node.js and Express, containerized with Docker. I set up GitHub Actions for CI to build and test it on each push. This will later +connect to the database, to respond with image URLs. + +![Images API](https://user-images.githubusercontent.com/60903378/179846653-c04f2636-75a0-47a0-9789-d58a0cd3928e.png) + +Although this API doesn’t NEED to scale to millions of users, it was a valuable exercise for building and scaling a system. I aimed for a minimum latency of 300ms with 200 RPS. + +### Image Database + +With an API ready to connect to the database, it was time to choose between a NoSQL or SQL database. + +The answer is obvious for this use case. Let’s walk through the data we have, and the use cases. + +We are going to store one single table with image URLs. This could easily be done in either database, but there is one key factor. We need a way to randomly pull a list of images +from the database. + +A SQL database makes it simple to query a random row, however, this is not horizontally scalable, and with a large data set, we are replicating the ENTIRE database to each new +node. + +On the other hand, NoSQL databases are horizontally scalable; which leads me to Cassandra, but unfortunately it is very difficult to pull random selections from this type of NoSQL +database. + +Finally, I settled with MongoDB, which has a built-in `$sample` method to pull from the records. 
+ +![Mongo Database](https://user-images.githubusercontent.com/60903378/179846661-8e20b67d-6d28-4971-939b-a7d347f15ee4.png) + +Once I got the MongoDB database running locally with Docker, I created a quick script to seed the database. + +Now it’s time to connect the API to the database. + +### Connecting API to the Database + +Next, I used the `mongoose` Node.js API to connect to the local MongoDB. + +I created two endpoints; one to upload an image URL, and another to retrieve a random list of images. + +![API Database Connection](https://user-images.githubusercontent.com/60903378/179846674-642b4412-3385-4b0a-9640-036225ce1423.png) + +### Endpoint Load Testing + +To experiment with scaling the API, I wanted to do load testing. Keep in mind that this API does not have much logic, meaning caching, or optimizing the code's performance, will +have a huge impact. + +I found a tool for load testing called [Artillery](https://www.artillery.io/). Following +[this guide](https://blog.appsignal.com/2021/11/10/a-guide-to-load-testing-nodejs-apis-with-artillery.html) I installed Artillery and began research for the test configuration. + +The API currently has the `/random` endpoint to return an image URL (a string), with very little computation. Let’s stress test this to see the current traffic limit. + +The random list endpoint is what we need to optimize. For the starting algorithm though, I seeded 100 image records into the database, and then pulled the ENTIRE list from the +database each request. The API would then choose 25 random elements to return. Let’s benchmark how this performs with load testing. + +With the first run, API, the limit on the `/random?count=25` endpoint was 225 RPS over 15 seconds, with 99% of the response times were under 300ms. We can improve this. + +![Load Testing Latency Chart](https://user-images.githubusercontent.com/60903378/179846688-33198d9a-f9fa-4611-ba15-7161f2bf87ce.png) + +![Load Testing](https://user-images.githubusercontent.com/60903378/179846696-e82bfe4e-b9b9-4114-9b59-48785b945021.png) + +### Optimizing the Endpoint + +We have many records of image URLs in the database. Somehow, we need to efficiently transform these into a list, pulling random selections from the database. + +![Random Database Query](https://user-images.githubusercontent.com/60903378/179846759-e5654196-093c-41a2-a5aa-06029ca5c47a.png) + +Let’s optimize the query for pulling documents from the database. Using a special mongodb query, we can drastically reduce the computational load for a single request. Running +locally in postman, `random?count=25` endpoint went from ~150ms for a single request, to \<50ms. + +This is the only code we need for this endpoint, compared to the previous 20 lines and O(n^2) space. + +![Random Query Code](https://user-images.githubusercontent.com/60903378/179846772-fdcf54a1-34eb-42a1-a4cd-8b05ba783a7f.png) + +With the new query, the endpoint maintains 99% sub-300ms response time with a max of 440 RPS over 15 seconds. + +![Latency Improvements Chart](https://user-images.githubusercontent.com/60903378/179846777-3518451f-33f2-4638-b4d9-c614db7d52e5.png) + +### Horizontally Scaling the API + +With the containerized Node.js/Express API, I could run multiple containers, scaling to handle more traffic. Using a tool called [minikube](https://github.com/kubernetes/minikube), +we can easily spin up a local [Kubernetes](https://github.com/kubernetes/kubernetes) cluster to horizontally scale Docker containers. 
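+For example, sampling 25 random documents is a one-liner in the `mongosh` shell — the collection name here is an assumption:
+
+```js
+// $sample picks random documents server-side, without loading the whole collection
+db.images.aggregate([{ $sample: { size: 25 } }]);
+```
+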
It was possible to keep one shared instance of +the database, and many APIs were routed with an internal Kubernetes load balancer. + +Horizontally scaling the API to two instances, the random endpoint maintains 99% sub-300ms response time with a max of 650 RPS over 15 seconds. Three API Instances => 99% sub-300ms +response time with a max of 1000 RPS over 15 seconds. Five API Instances => 99% sub-300ms response time with a max of 1200 RPS over 15 seconds. + +In practice, five instances were the limit of scaling the API horizontally. Even with more instances, the traffic was never sub 300ms response time. Note, this is dependent on the +hardware of my local machine, and not accounting for cross-network latency in the real world. + +With scaling, we can achieve higher throughput, allowing more traffic to flow, and resiliency, where a failed node can simply be replaced. + +![API Horizontal Scale](https://user-images.githubusercontent.com/60903378/179846784-c605f7e8-856a-4d8d-b1ff-e9ad16d34c33.png) + +Since the image responses are intended to be random, we cannot cache the responses. It would be possible to scale the database with a slave/master system, but without a large data +set, it is not worth the time to test. The bottleneck is most likely the API and connections to the database, versus MongoDB not handling read requests. It may be possible to +improve the read times with a REDIS database, using in-memory caching, but that is overkill for this project. + +### Setting up Authentication + +After playing around with load testing, I wanted to explore [JSON Web Tokens](https://jwt.io/) and build an API to handle authentication. + +This auth API will generate tokens, which will be sent back to the client as headers. The tokens headers are stored client-side (e.g. cookies, local storage), and sent to the +backend each request. + +If we expand the backend, we could include the authentication logic in each microservice. + +![API Coupled Services](https://user-images.githubusercontent.com/60903378/179846797-c965ec7f-4076-4ed7-9b2e-4909004fec6e.png) + +Not practical. Instead, we can decouple the logic into its own service as shown below: + +![API Auth Gateway](https://user-images.githubusercontent.com/60903378/179846807-6f650b2d-165f-4345-8f93-34633b2f3cc4.png) + +### Creating a Gateway API + +Instead of exposing the users directly to each microservice, we should route ALL traffic from the clients to the Gateway API. For this, I chose the same tech stack of +Node.js/Express. Using a library, I was able to set up a proxy to the other services. In the future, this could be very useful to standardize requests to the backend, track usage, +forward data to a logging microservice, talk to a message broker, and more. + +### Environment Variables and Configuration + +Most of the system built, I needed to simplify the process for configuring the Docker containers locally, and how environment variables would be shared to each. Keep in mind, each +service needed to access these in GitHub Actions as well, during deployment. + +I used the `docker-compose` files to easily spin up the containers locally. I used default values for the environment variables for local development, and kept the config files +separated so it was easy to follow. + +This step was just a process of carefully writing the Docker and docker-compose files, and setting up GitHub Actions Secrets. The code could not run without having all env +variables, could be hard to debug locally or lead to ambiguity for other developers. 
+ +### A Simple Frontend + +I would talk about building the frontend, but it is just a single page React app I built quickly. It does use a CSS library called [Bulma](https://bulma.io/), which is similar to +tailwind and worth checking out. I did spend a day implementing a login/signup page, but this was just for the learning experience, and not what I wanted in the final product. + +### GitHub Actions Testing and Deployment + +With most of the code written, it was time to deploy the app. This was actually a bumpy road because I was not sure how to approach this. I was keeping each component in its own +repository on my personal GitHub Account, which was getting hard to keep track of. + +My solution was to create the [Llama as a Service](https://github.com/llama-as-a-service) GitHub Organization, which also allowed me to store organization-wide secrets that any +repository could access. + +Using GitHub Actions, I created workflows to build and test code on every push, and deploy to main branch Heroku (and Netlify for the frontend). + +I also created a workflow to tag and version every update, and release the Docker image to the [GitHub Container Registry](https://ghcr.io/). These packages could be private to the +organization, or public. I did not end up using these published containers, but it was really dope to see everything automated. + +### Deploying to Production + +So after deploying the gateway API, frontend, and backend, I hoped all the services would be connected in production. For some reason the +[http-proxy-middleware](https://www.npmjs.com/package/http-proxy-middleware) was causing problems, and it was not worth redesigning the whole system. I was not ready to work with +deploying a Kubernetes Cluster, so I did not use the GHCR Docker packages for deploying. + +Instead, I just stripped away the extra services that I had been working on, and stuck with a simple system to deploy. For the final product, there is the frontend deployed on +Netlify, which connects to the API on Heroku, with talks to the MongoDB Atlas database (in the cloud). + +![Application Architecture Diagram](https://user-images.githubusercontent.com/60903378/179846810-754daa79-2639-4550-a7dd-e0be88ef71bf.png) + +### View the Source Code + +If you wish to view all of the source code for this project, you can look through each repository here: + +- GitHub Organization: [github.com/llama-as-a-service](https://github.com/llama-as-a-service) +- All the GHCR Packages: [github.com/orgs/llama-as-a-service/packages](https://github.com/orgs/llama-as-a-service/packages) +- Frontend - [github.com/llama-as-a-service/frontend](https://github.com/llama-as-a-service/frontend) +- Images API - [github.com/llama-as-a-service/images-service](https://github.com/llama-as-a-service/images-service) +- Authentication API - [github.com/llama-as-a-service/auth-service](https://github.com/llama-as-a-service/auth-service) +- Gateway API - [github.com/llama-as-a-service/gateway-service](https://github.com/llama-as-a-service/gateway-service) + +If you want to have a repository with Node.js, Express, and Docker set up with GitHub Actions, check out the +[boilerplate repository here](https://github.com/llama-as-a-service/express-docker-boilerplate) + +If you are interested in more projects by me, you can check out the [ManyShiba Twitter bot](https://spencerlepine.github.io/blog/manyshiba-the-worlds-greatest-twitter-bot), or more +on my website. 
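+The default-value pattern mentioned above might look something like this in a shared config module (the variable names are my assumptions):
+
+```js
+// Sketch: central config with local-development fallbacks
+module.exports = {
+  port: process.env.PORT || 4000,
+  mongoUri: process.env.MONGO_URI || 'mongodb://localhost:27017/llamas',
+  jwtSecret: process.env.JWT_SECRET || 'local-dev-only-secret',
+};
+```
+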
+ +--- + +Follow my journey or connect with me here: + +- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/) +- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com) +- Portfolio: [spencerlepine.com](https://spencerlepine.com) +- https://github.com/spencerlepine +- Twitter: [@spencerlepine](https://twitter.com/spencerlepine) diff --git a/blog/building-llama-as-a-service/index.md b/content/building-llama-as-a-service/index.medium similarity index 96% rename from blog/building-llama-as-a-service/index.md rename to content/building-llama-as-a-service/index.medium index f0b22a7..007f72e 100644 --- a/blog/building-llama-as-a-service/index.md +++ b/content/building-llama-as-a-service/index.medium @@ -1,9 +1,7 @@ --- title: Building Llama as a Service (LaaS) -slug: building-llama-as-a-service -tags: [Express, Docker, MongoDB, Node.js, Heroku, GitHub Actions] -authors: [spencerlepine] -date: 2022-07-12T12:00 +description: Creating the world’s first Llama as a Service, the random image API you didn't realize you needed. +publish_status: "draft" --- ![Blog Post Thumbnail](./thumbnail.jpg) @@ -181,12 +179,13 @@ If you wish to view all of the source code for this project, you can look throug If you want to have a repository with Node.js, Express, and Docker set up with GitHub Actions, check out the [boilerplate repository here](https://github.com/llama-as-a-service/express-docker-boilerplate) -If you are interested in more projects by me, you can check out the [ManyShiba Twitter bot](https://www.spencerlepine.com/blog/manyshiba-the-worlds-greatest-twitter-bot), or more on my website. +If you are interested in more projects by me, you can check out the [ManyShiba Twitter bot](https://spencerlepine.github.io/blog/manyshiba-the-worlds-greatest-twitter-bot), or more on my website. --- -Also find me here: -* [Twitter (@spencerlepine)](https://twitter.com/SpencerLepine) -* [GitHub (@spencerlepine)](https://github.com/spencerlepine) -* [LinkedIn](https://www.linkedin.com/in/spencer-lepine/) -* [YouTube (Spencer Lepine)](https://www.youtube.com/channel/UCBL6vAHJZqUlyJp-rcFU55Q) +Follow my journey or connect with me here: +- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/) +- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com) +- Portfolio: [spencerlepine.com](https://spencerlepine.com) +- https://github.com/spencerlepine +- Twitter: [@spencerlepine](https://twitter.com/spencerlepine) diff --git a/blog/building-llama-as-a-service/thumbnail.jpg b/content/building-llama-as-a-service/thumbnail.jpg similarity index 100% rename from blog/building-llama-as-a-service/thumbnail.jpg rename to content/building-llama-as-a-service/thumbnail.jpg diff --git a/blog/creating-custom-git-commands/index.devto b/content/creating-custom-git-commands/index.devto similarity index 84% rename from blog/creating-custom-git-commands/index.devto rename to content/creating-custom-git-commands/index.devto index f0a5105..8c66a32 100644 --- a/blog/creating-custom-git-commands/index.devto +++ b/content/creating-custom-git-commands/index.devto @@ -55,8 +55,9 @@ Use this script or create your own, and follow these steps to set up the custom Viola! This script will accept one command line argument of the destination repo URL. It will automatically open the new project in VSCode in one command. 

-Also find me here:
- - [Twitter](https://twitter.com/SpencerLepine)
- - [GitHub](https://github.com/spencerlepine)
- - [LinkedIn](https://www.linkedin.com/in/spencer-lepine/)
- - [YouTube](https://www.youtube.com/channel/UCBL6vAHJZqUlyJp-rcFU55Q)
+Follow my journey or connect with me here:
+- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
+- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
+- Portfolio: [spencerlepine.com](https://spencerlepine.com)
+- https://github.com/spencerlepine
+- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/blog/creating-custom-git-commands/index.md b/content/creating-custom-git-commands/index.md
similarity index 59%
rename from blog/creating-custom-git-commands/index.md
rename to content/creating-custom-git-commands/index.md
index bee8169..617b5cb 100644
--- a/blog/creating-custom-git-commands/index.md
+++ b/content/creating-custom-git-commands/index.md
@@ -9,22 +9,26 @@ date: 2021-08-15T12:00

![Blog Post Thumbnail](./thumbnail.jpg)

-Every time I clone a repository from GitHub, I always run the same set of commands. This is prone to typos and simply inconvenient. There is a simple solution of combining each step into a single command that automatically runs everything for us.
+Every time I clone a repository from GitHub, I always run the same set of commands. This is prone to typos and simply inconvenient. There is a simple solution: combine each
+step into a single command that automatically runs everything for us.

In this example, I need to clone a GitHub repository, move into the new directory, and then open the project in VSCode. Instead of multiple commands:
+
```sh
git clone https://github.com/spencerlepine/readme-crawler
cd readme-crawler
code .
```
+
It would be great to run one command:
+
```sh
clone https://github.com/spencerlepine/readme-crawler
```

-To achieve this, we can create a script in the ```~/bin``` directory. Make sure this path matches up with your configuration for the terminal (e.g. ```PATH=$PATH:$HOME/bin```).
+To achieve this, we can create a script in the `~/bin` directory. Make sure this path matches up with your configuration for the terminal (e.g. `PATH=$PATH:$HOME/bin`).

Let’s create a custom script to combine the git commands.

@@ -43,22 +47,24 @@ Let’s create a custom script to combine the git commands.

Use this script or create your own, and follow these steps to set up the custom command:

-- Navigate to usr/local/bin -> ```cd ~/../../usr/local/bin```
-- Run ```vim clone```
-  - *Paste the script*
+- Navigate to `/usr/local/bin` -> `cd /usr/local/bin`
+- Run `vim clone`
+  - _Paste the script_
- Save the file:
-  - *press ‘ESC’
-  - *press ‘SHIFT’ + ‘:’
-  - *type ‘wq’ + ENTER
+  - press ‘ESC’
+  - press ‘SHIFT’ + ‘:’
+  - type ‘wq’ + ENTER
- Create an executable
-  - ```chmod +x clone```
+  - `chmod +x clone`
- Run the command!
-  - ```clone https://github.com/spencerlepine/manyshiba-bot.git```
+  - `clone https://github.com/spencerlepine/manyshiba-bot.git`

Voilà! This script will accept one command line argument of the destination repo URL. It will automatically open the new project in VSCode in one command.

-Also find me here:
- - [Twitter](https://twitter.com/SpencerLepine)
- - [GitHub](https://github.com/spencerlepine)
- - [LinkedIn](https://www.linkedin.com/in/spencer-lepine/)
- - [YouTube](https://www.youtube.com/channel/UCBL6vAHJZqUlyJp-rcFU55Q)
+Follow my journey or connect with me here:
+
+- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
+- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
+- Portfolio: [spencerlepine.com](https://spencerlepine.com)
+- https://github.com/spencerlepine
+- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/blog/creating-custom-git-commands/index.medium b/content/creating-custom-git-commands/index.medium
similarity index 83%
rename from blog/creating-custom-git-commands/index.medium
rename to content/creating-custom-git-commands/index.medium
index e5cff8e..f876258 100644
--- a/blog/creating-custom-git-commands/index.medium
+++ b/content/creating-custom-git-commands/index.medium
@@ -54,8 +54,9 @@ Use this script or create your own, and follow these steps to set up the custom

Voilà! This script will accept one command line argument of the destination repo URL. It will automatically open the new project in VSCode in one command.

-Also find me here:
- - [Twitter](https://twitter.com/SpencerLepine)
- - [GitHub](https://github.com/spencerlepine)
- - [LinkedIn](https://www.linkedin.com/in/spencer-lepine/)
- - [YouTube](https://www.youtube.com/channel/UCBL6vAHJZqUlyJp-rcFU55Q)
+Follow my journey or connect with me here:
+- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
+- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
+- Portfolio: [spencerlepine.com](https://spencerlepine.com)
+- https://github.com/spencerlepine
+- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/blog/creating-custom-git-commands/thumbnail.jpg b/content/creating-custom-git-commands/thumbnail.jpg
similarity index 100%
rename from blog/creating-custom-git-commands/thumbnail.jpg
rename to content/creating-custom-git-commands/thumbnail.jpg
diff --git a/content/git-project-configuration-with-husky-and-eslint/index.devto b/content/git-project-configuration-with-husky-and-eslint/index.devto
new file mode 100644
index 0000000..215642d
--- /dev/null
+++ b/content/git-project-configuration-with-husky-and-eslint/index.devto
@@ -0,0 +1,158 @@
+---
+title: Git Project Configuration With Husky and ESLint
+description: Git conventions using pre-commit hooks and enforcing code styles. # Dev.to
+tags: 'linter, husky, workflow' # Dev.to (Max 3)
+canonical_url: null # Dev.to
+---
+
+![Blog Post Thumbnail](./thumbnail.jpg)
+
+Working on a project with Git and GitHub is relatively simple. When a project starts to grow, however, it is crucial to write clean code that other developers can read. Follow this article to learn how to set up linting and pre-commit hooks for your repository.
+
+Let’s walk through the steps for a one-time setup to configure [husky](https://github.com/typicode/husky) pre-commit and pre-push hooks, [ESLint](https://eslint.org/) with code style conventions, the [prettier](https://prettier.io/) code formatter, and [lint-staged](https://github.com/okonet/lint-staged). Husky automatically runs a script on each commit or push. This is useful for linting files to enforce code styles that keep the entire code base consistent.

## Walkthrough

Install the dependencies:
```
npm install husky@4.3.8 lint-staged@10.5.4 prettier@2.8.8 --save-dev
```
```
yarn add husky@4.3.8 lint-staged@10.5.4 prettier@2.8.8 --dev
```

### Package.json Updates

Add the following to your `package.json` to configure all three packages:

```json
{
  "name": "@spencer/example-package",
  // ...
  "scripts": {
    "format": "prettier --write ."
  },
  "prettier": {
    "printWidth": 180,
    "tabWidth": 2,
    "singleQuote": true,
    "semi": true,
    "trailingComma": "es5",
    "bracketSpacing": true,
    "arrowParens": "avoid",
    "proseWrap": "always",
    "requirePragma": false,
    "insertPragma": false,
    "endOfLine": "lf",
    "jsxBracketSameLine": true
  },
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged"
    }
  },
  "lint-staged": {
    "**/*.(js|jsx|ts|tsx|json|css|md)": [
      "prettier --write"
    ]
  }
}
```

### Configure ESLint (optional)

First, install this package:
```
npm install eslint-config-prettier
```

Then, run `npm init @eslint/config` to create a config file and choose preferred code styles.

Alternatively, use this example file. In the root directory, create `.eslintrc`:

```json
{
  "extends": [
    "eslint:recommended"
  ],
  "plugins": [
    "prettier"
  ],
  "parserOptions": {
    "ecmaVersion": 2017
  },
  "env": {
    "es6": true
  },
  "rules": {
    "no-console": "off",
    "no-unused-vars": "off",
    "react/prop-types": "off",
    "quotes": [
      2,
      "double",
      {
        "avoidEscape": true
      }
    ]
  }
}
```

## Everything in action

After making changes, commit the files, and see `lint-staged` automatically run, triggered by the pre-commit hook.

```sh
my-project$ git commit -m 'example commit message'
✔ Preparing lint-staged...
✔ Running tasks for staged files...
✔ Applying modifications from tasks...
✔ Cleaning up temporary files...
[example-branch 4bc4030] add new husky setup
 4 files changed, 59 insertions(+), 44 deletions(-)
```

All files have been linted and automatically fixed with `prettier`, or denied if too many errors were thrown. Now we can push the "clean" code.
```sh
my-project$ git push origin example-branch
# npx lint-staged
# ... (no errors found)
# npm test
# ... (PASS)
Enumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 8 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 375 bytes | 375.00 KiB/s, done.
Total 4 (delta 3), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (3/3), completed with 3 local objects.
To https://github.com/spencerlepine/my-project.git
 4bc4030..b558038 example-branch -> example-branch
```

## Boilerplate

See a working example here: [GitHub repository](https://github.com/spencerlepine/husky-boilerplate).

## Notes

A useful trick is the `--no-verify` flag to SKIP the pre-commit or pre-push hook.
Use this option to bypass the husky script when you need to force a commit/push.

```sh
my-project$ git push origin my-branch --no-verify
# husky will not run "npm test"
... pushing to GitHub
```

When `husky` released v7, it introduced some major changes to the configuration. There are many articles and Stack Overflow answers about husky, but some of them are misleading if they were using v4.
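
One last configuration note: `lint-staged` can also load its settings from a standalone file instead of `package.json`. A minimal `lint-staged.config.js` sketch, equivalent to the `"lint-staged"` entry shown above:

```js
// lint-staged.config.js — equivalent to the "lint-staged" block in package.json above
module.exports = {
  '**/*.(js|jsx|ts|tsx|json|css|md)': ['prettier --write'],
};
```
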

Hope this article helped! Interested in more? Check out more articles [here](https://spencerlepine.github.io/blog).

Follow my journey or connect with me here:
- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
- Portfolio: [spencerlepine.com](https://spencerlepine.com)
- https://github.com/spencerlepine
- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/content/git-project-configuration-with-husky-and-eslint/index.md b/content/git-project-configuration-with-husky-and-eslint/index.md
new file mode 100644
index 0000000..34e7346
--- /dev/null
+++ b/content/git-project-configuration-with-husky-and-eslint/index.md
@@ -0,0 +1,160 @@
+---
+title: Git Project Configuration With Husky and ESLint
+slug: git-project-configuration-with-husky-and-eslint
+tags: [Git, Husky, ESLint, Workflow, GitHub]
+authors: [spencerlepine]
+date: 2022-03-20T12:00
+---
+
+![Blog Post Thumbnail](./thumbnail.jpg)
+
+Working on a project with Git and GitHub is relatively simple. When a project starts to grow, however, it is crucial to write clean code that other developers can read. Follow this
+article to learn how to set up linting and pre-commit hooks for your repository.
+
+Let’s walk through the steps for a one-time setup to configure [husky](https://github.com/typicode/husky) pre-commit and pre-push hooks, [ESLint](https://eslint.org/) with code
+style conventions, the [prettier](https://prettier.io/) code formatter, and [lint-staged](https://github.com/okonet/lint-staged). Husky automatically runs a script on each commit or
+push. This is useful for linting files to enforce code styles that keep the entire code base consistent.
+
+## Walkthrough
+
+Install the dependencies:
+
+```
+npm install husky@4.3.8 lint-staged@10.5.4 prettier@2.8.8 --save-dev
+```
+
+```
+yarn add husky@4.3.8 lint-staged@10.5.4 prettier@2.8.8 --dev
+```
+
+### Package.json Updates
+
+Add the following to your `package.json` to configure all three packages:
+
+```json
+{
+  "name": "@spencer/example-package",
+  // ...
+  "scripts": {
+    "format": "prettier --write ."
+  },
+  "prettier": {
+    "printWidth": 180,
+    "tabWidth": 2,
+    "singleQuote": true,
+    "semi": true,
+    "trailingComma": "es5",
+    "bracketSpacing": true,
+    "arrowParens": "avoid",
+    "proseWrap": "always",
+    "requirePragma": false,
+    "insertPragma": false,
+    "endOfLine": "lf",
+    "jsxBracketSameLine": true
+  },
+  "husky": {
+    "hooks": {
+      "pre-commit": "lint-staged"
+    }
+  },
+  "lint-staged": {
+    "**/*.(js|jsx|ts|tsx|json|css|md)": ["prettier --write"]
+  }
+}
+```
+
+### Configure ESLint (optional)
+
+First, install this package:
+
+```
+npm install eslint-config-prettier
+```
+
+Then, run `npm init @eslint/config` to create a config file and choose preferred code styles.
+
+Alternatively, use this example file. In the root directory, create `.eslintrc`:
+
+```json
+{
+  "extends": ["eslint:recommended"],
+  "plugins": ["prettier"],
+  "parserOptions": {
+    "ecmaVersion": 2017
+  },
+  "env": {
+    "es6": true
+  },
+  "rules": {
+    "no-console": "off",
+    "no-unused-vars": "off",
+    "react/prop-types": "off",
+    "quotes": [
+      2,
+      "double",
+      {
+        "avoidEscape": true
+      }
+    ]
+  }
+}
+```
+
+## Everything in action
+
+After making changes, commit the files, and see `lint-staged` automatically run, triggered by the pre-commit hook.
+
+```sh
+my-project$ git commit -m 'example commit message'
+✔ Preparing lint-staged...
+✔ Running tasks for staged files...
+✔ Applying modifications from tasks...
+✔ Cleaning up temporary files...
+[example-branch 4bc4030] add new husky setup
+ 4 files changed, 59 insertions(+), 44 deletions(-)
+```
+
+All files have been linted and automatically fixed with `prettier`, or denied if too many errors were thrown. Now we can push the "clean" code.
+
+```sh
+my-project$ git push origin example-branch
+# npx lint-staged
+# ... (no errors found)
+# npm test
+# ... (PASS)
+Enumerating objects: 7, done.
+Counting objects: 100% (7/7), done.
+Delta compression using up to 8 threads
+Compressing objects: 100% (4/4), done.
+Writing objects: 100% (4/4), 375 bytes | 375.00 KiB/s, done.
+Total 4 (delta 3), reused 0 (delta 0), pack-reused 0
+remote: Resolving deltas: 100% (3/3), completed with 3 local objects.
+To https://github.com/spencerlepine/my-project.git
+ 4bc4030..b558038 example-branch -> example-branch
+```
+
+## Boilerplate
+
+See a working example here: [GitHub repository](https://github.com/spencerlepine/husky-boilerplate).
+
+## Notes
+
+A useful trick is the `--no-verify` flag to SKIP the pre-commit or pre-push hook. Use this option to bypass the husky script when you need to force a commit/push.
+
+```sh
+my-project$ git push origin my-branch --no-verify
+# husky will not run "npm test"
+... pushing to GitHub
+```
+
+When `husky` released v7, it introduced some major changes to the configuration. There are many articles and Stack Overflow answers about husky, but some of them are misleading if they
+were using v4.
+
+Hope this article helped! Interested in more? Check out more articles [here](https://spencerlepine.github.io/blog).
+
+Also find me here:
+
+- [Twitter](https://twitter.com/SpencerLepine)
+- [GitHub](https://github.com/spencerlepine)
+- [LinkedIn](https://www.linkedin.com/in/spencerlepine/)
+- [YouTube](https://www.youtube.com/channel/UCBL6vAHJZqUlyJp-rcFU55Q)
diff --git a/blog/git-project-configuration-with-husky-and-eslint/git-project-configuration-with-husky-and-eslint.md b/content/git-project-configuration-with-husky-and-eslint/index.medium
similarity index 87%
rename from blog/git-project-configuration-with-husky-and-eslint/git-project-configuration-with-husky-and-eslint.md
rename to content/git-project-configuration-with-husky-and-eslint/index.medium
index 5589650..76c90ec 100644
--- a/blog/git-project-configuration-with-husky-and-eslint/git-project-configuration-with-husky-and-eslint.md
+++ b/content/git-project-configuration-with-husky-and-eslint/index.medium
@@ -1,9 +1,7 @@
---
title: Git Project Configuration With Husky and ESLint
-slug: git-project-configuration-with-husky-and-eslint
-tags: [Git, Husky, ESLint, Workflow, GitHub]
-authors: [spencerlepine]
-date: 2022-03-20T12:00
+description: Git conventions using pre-commit hooks and enforcing code styles.
+publish_status: "draft"
---

![Blog Post Thumbnail](./thumbnail.jpg)
@@ -33,7 +31,7 @@ Add the following to your `package.json` to configure all three packages:
  "scripts": {
    "format": "prettier --write ."
  },
-  "prettier": { 
+  "prettier": {
    "printWidth": 180,
    "tabWidth": 2,
    "singleQuote": true,
@@ -60,7 +58,7 @@ Add the following to your `package.json` to configure all three packages:
}
```

-## Configure ESLint (optional)
+### Configure ESLint (optional)

First, install this package
```
@@ -100,7 +98,7 @@ Alternatively, use this example file. In the root directory, create `.eslintrc`:
}
```

-#### Everything in action
+## Everything in action

After making changes, commit the files, and see `lint-staged` automatically run, triggered by the pre-commit hook.

@@ -132,12 +130,15 @@ To https://github.com/spencerlepine/my-project.git
 4bc4030..b558038 example-branch -> example-branch
```

-#### Boilerplate
+## Boilerplate
+
See a working example here: [GitHub repository](https://github.com/spencerlepine/husky-boilerplate).

-#### Notes
+## Notes
+
A useful trick is the `--no-verify` flag to SKIP the pre-commit or pre-push hook. Use this option to bypass the husky script when you need to force a commit/push.
+
```sh
my-project$ git push origin my-branch --no-verify
# husky will not run "npm test"
@@ -146,10 +147,11 @@ my-project$ git push origin my-branch --no-verify

When `husky` released v7, it introduced some major changes to the configuration. There are many articles and Stack Overflow answers about husky, but some of them are misleading if they were using v4.

-Hope this article helped! Interested in more, check out more articles [here](https://www.spencerlepine.com/blog).
+Hope this article helped! Interested in more? Check out more articles [here](https://spencerlepine.github.io/blog).

-Also find me here:
- - [Twitter](https://twitter.com/SpencerLepine)
- - [GitHub](https://github.com/spencerlepine)
- - [LinkedIn](https://www.linkedin.com/in/spencer-lepine/)
- - [YouTube](https://www.youtube.com/channel/UCBL6vAHJZqUlyJp-rcFU55Q) \ No newline at end of file
+Follow my journey or connect with me here:
+- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
+- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
+- Portfolio: [spencerlepine.com](https://spencerlepine.com)
+- https://github.com/spencerlepine
+- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/blog/git-project-configuration-with-husky-and-eslint/thumbnail.jpg b/content/git-project-configuration-with-husky-and-eslint/thumbnail.jpg
similarity index 100%
rename from blog/git-project-configuration-with-husky-and-eslint/thumbnail.jpg
rename to content/git-project-configuration-with-husky-and-eslint/thumbnail.jpg
diff --git a/content/manyshiba-the-worlds-greatest-twitter-bot/index.devto b/content/manyshiba-the-worlds-greatest-twitter-bot/index.devto
new file mode 100644
index 0000000..15ae378
--- /dev/null
+++ b/content/manyshiba-the-worlds-greatest-twitter-bot/index.devto
@@ -0,0 +1,160 @@
+---
+title: ManyShiba - The World's Greatest Twitter Bot
+description: Exploration of building a custom Twitter Bot. # Dev.to
+tags: 'twitter, nodejs, bot' # Dev.to (Max 3)
+canonical_url: null # Dev.to
+---
+
+![Blog Post Thumbnail](./thumbnail.jpg)
+
+ManyShiba is the greatest Twitter bot ever created. Bless your soul with a daily photo of the almighty Shiba. Be uplifted by the spirit of this holy and sacred creature to free your soul.
+
+For so long, I felt that something was missing in my life. After being blessed by the presence of a divine Shiba Inu dog, I had my answer. I could cleanse my soul each day by reminding myself of this divine being. But that couldn’t be it, there had to be some way I could bless EVERYONE.
+
+Behold - your new favorite Twitter bot - [@ManyShiba](https://twitter.com/manyshiba). (source code: [GitHub](https://github.com/spencerlepine/manyshiba-bot))
+
+### So what exactly is the ManyShiba Bot?
+
+This is a simple Node.js app connected to the Twitter API. The app is deployed on [Heroku](https://dashboard.heroku.com/) and connected to the Twitter developer account.
+
+Each time the script runs, a new dog image will be fetched from the [Shibe.online](https://shibe.online/) API. 
That image will then be uploaded and posted on the Twitter feed.

### Technologies:
To build this app, there were three technologies I worked with.

 - [Node.js](https://nodejs.org/)
 - [Shibe.online API](https://shibe.online/)
 - [twit](https://www.npmjs.com/package/twit)

Node is a popular JavaScript runtime environment that can easily run on Heroku. Heroku is a PaaS and a great tool to deploy a small app for free.

The Shibe.online API is a third-party service to retrieve a link for dog pictures. Since there are many random photos to use in that database, it is the perfect resource for finding many new photos.

Finally, the twit library is a Twitter API Client for Node that simplifies the REST requests. Since this app will only be posting tweets, there are no advanced requests being made to the Twitter API.

With each of these tools, we can have a functioning Twitter bot. Here are the steps for the code:

- Save the Twitter API configuration
- Initialize the Twit Client with the configuration
- Fetch a random image from Shibe.online
- Convert the image from a URL to base64
- Tweet the image

After registering a [Twitter App](https://developer.twitter.com/), make sure to enable Read/Write permissions in the App settings. Create a `.env` file in the root of the project based on `.env.example`. We can use this data in our file with an object like this:

```js
const config = {
  consumer_key: process.env.TWITTER_API_KEY,
  consumer_secret: process.env.TWITTER_API_SECRET,
  access_token: process.env.TWITTER_API_ACCESS_TOKEN,
  access_token_secret: process.env.TWITTER_API_ACCESS_TOKEN_SECRET,
}
```

In `app.js` we can import `twit` and pass along the config object:

```js
import twit from 'twit';
...
const twitterClient = new twit(config);
```

Before we tweet anything, we first need to generate the content to post. This is where we will retrieve an image from the Shibe.online API. Note, the Shibe endpoint will return a list of image URLs stored on a third-party server. We must download this image, because posting an image URL does not actually display it on the feed.

```js
const API_ENDPOINT = 'http://shibe.online/api/shibes?count=1&urls=true&httpsUrls=false';
...
const fetchRandomImage = async (tweetFunction) => {
  const resultList = await fetch(API_ENDPOINT).then(res => res.json());
  const newImage = resultList[0];
  return newImage;
}
```

After retrieving a URL from Shibe.online, we must fetch the image as well. We can take the data from the image URL and convert it to a base64 string in memory. Since we are saving the image data, a Tweet will always load the image because it no longer depends on the third-party image database.

Note, you can use any library for HTTP requests like [axios](https://axios-http.com/). This example uses the `http` and `node-fetch` libraries available on [npm](https://www.npmjs.com/).

```js
import fetch from 'node-fetch';
import http from 'http';
...
const urlToBase64 = async (imgUrl, tweetFunction) => {
  await http.get(imgUrl, async (httpRes) => {
    httpRes.setEncoding('base64');
    // Exclude -> "data:" + httpRes.headers["content-type"] + ";base64,";
    let body = "";
    await httpRes.on('data', (data) => {
      body += data;
    });
    await httpRes.on('end', () => {
      tweetFunction(body);
    });
  }).on('error', (e) => {
    console.log(`Got error: ${e.message}`);
  });
}

const fetchRandomImage = async (tweetFunction) => {
  ...
+  await urlToBase64(newImage, tweetFunction);
+}
+```
+
+With a base64 string, we need to upload it as media content to Twitter. After uploading it, we have access to a `media_id`. This media_id can be attached to the actual Tweet, which will cause the image to render on the feed. For this project, there is no text content attached to this Tweet.
+
+```js
+...
+const tweetImage = (tweetContent) => {
+  if (tweetContent) {
+    twitterClient.post('media/upload', { media_data: tweetContent }, function (err, data, response) {
+      // now we can assign alt text to the media, for use by screen readers and
+      // other text-based presentations and interpreters
+      var mediaIdStr = data.media_id_string
+      var altText = "Shiba Inu"
+      var meta_params = { media_id: mediaIdStr, alt_text: { text: altText } }
+
+      twitterClient.post('media/metadata/create', meta_params, function (err, data, response) {
+        if (!err) {
+          // now we can reference the media and post a tweet (media will attach to the tweet)
+          var params = { status: '', media_ids: [mediaIdStr] }
+
+          twitterClient.post('statuses/update', params, function (err, data, response) {
+            console.log(data)
+          })
+        }
+      })
+    })
+  }
+}
+
+fetchRandomImage(tweetImage);
+```
+
+With each step tied together, we can retrieve Shiba images and generate Tweets with media content. To see the source code, head over to the [GitHub repository](https://github.com/spencerlepine/manyshiba-bot).
+
+With a working Twitter bot, I could run the script with Node on my local machine. However, it wouldn’t be automated if I had to run it manually. To solve this, I decided to deploy everything onto Heroku. This service can run Node servers, not just serve simple static files.
+
+With [Heroku Scheduler](https://devcenter.heroku.com/articles/scheduler), you can configure script executions. Make sure to add this script to your `package.json` file:
+
+```js
+{
+  ...
+  "scripts": {
+    "start": "node app.js"
+  }
+}
+```
+
+Adding a setting to execute the script on a timer makes the bot automated. I decided to let the bot create a daily Tweet with this tool. Our ManyShiba bot is now fully functional!
+
+---
+
+View the source code on [GitHub](https://github.com/spencerlepine/manyshiba-bot).
+
+Follow my journey or connect with me here:
+- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
+- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
+- Portfolio: [spencerlepine.com](https://spencerlepine.com)
+- https://github.com/spencerlepine
+- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/content/manyshiba-the-worlds-greatest-twitter-bot/index.md b/content/manyshiba-the-worlds-greatest-twitter-bot/index.md
new file mode 100644
index 0000000..6ac647e
--- /dev/null
+++ b/content/manyshiba-the-worlds-greatest-twitter-bot/index.md
@@ -0,0 +1,174 @@
+---
+title: ManyShiba - The World's Greatest Twitter Bot
+slug: manyshiba-the-worlds-greatest-twitter-bot
+tags: [GitHub, Twitter, NodeJS, Bot]
+authors: [spencerlepine]
+date: 2021-09-01T12:00
+---
+
+![Blog Post Thumbnail](./thumbnail.jpg)
+
+ManyShiba is the greatest Twitter bot ever created. Bless your soul with a daily photo of the almighty Shiba. Be uplifted by the spirit of this holy and sacred creature to free
+your soul.
+
+For so long, I felt that something was missing in my life. After being blessed by the presence of a divine Shiba Inu dog, I had my answer. I could cleanse my soul each day by
+reminding myself of this divine being. 
But that couldn’t be it, there had to be some way I could bless EVERYONE.

Behold - your new favorite Twitter bot - [@ManyShiba](https://twitter.com/manyshiba). (source code: [GitHub](https://github.com/spencerlepine/manyshiba-bot))

### So what exactly is the ManyShiba Bot?

This is a simple Node.js app connected to the Twitter API. The app is deployed on [Heroku](https://dashboard.heroku.com/) and connected to the Twitter developer account.

Each time the script runs, a new dog image will be fetched from the [Shibe.online](https://shibe.online/) API. That image will then be uploaded and posted on the Twitter feed.

### Technologies:

To build this app, there were three technologies I worked with.

- [Node.js](https://nodejs.org/)
- [Shibe.online API](https://shibe.online/)
- [twit](https://www.npmjs.com/package/twit)

Node is a popular JavaScript runtime environment that can easily run on Heroku. Heroku is a PaaS and a great tool to deploy a small app for free.

The Shibe.online API is a third-party service to retrieve a link for dog pictures. Since there are many random photos to use in that database, it is the perfect resource for
finding many new photos.

Finally, the twit library is a Twitter API Client for Node that simplifies the REST requests. Since this app will only be posting tweets, there are no advanced requests being made
to the Twitter API.

With each of these tools, we can have a functioning Twitter bot. Here are the steps for the code:

- Save the Twitter API configuration
- Initialize the Twit Client with the configuration
- Fetch a random image from Shibe.online
- Convert the image from a URL to base64
- Tweet the image

After registering a [Twitter App](https://developer.twitter.com/), make sure to enable Read/Write permissions in the App settings. Create a `.env` file in the root of the project
based on `.env.example`. We can use this data in our file with an object like this:

```js
const config = {
  consumer_key: process.env.TWITTER_API_KEY,
  consumer_secret: process.env.TWITTER_API_SECRET,
  access_token: process.env.TWITTER_API_ACCESS_TOKEN,
  access_token_secret: process.env.TWITTER_API_ACCESS_TOKEN_SECRET,
};
```

In `app.js` we can import `twit` and pass along the config object:

```js
import twit from 'twit';
...
const twitterClient = new twit(config);
```

Before we tweet anything, we first need to generate the content to post. This is where we will retrieve an image from the Shibe.online API. Note, the Shibe endpoint will return a
list of image URLs stored on a third-party server. We must download this image, because posting an image URL does not actually display it on the feed.

```js
const API_ENDPOINT = 'http://shibe.online/api/shibes?count=1&urls=true&httpsUrls=false';
...
const fetchRandomImage = async (tweetFunction) => {
  const resultList = await fetch(API_ENDPOINT).then(res => res.json());
  const newImage = resultList[0];
  return newImage;
}
```

After retrieving a URL from Shibe.online, we must fetch the image as well. We can take the data from the image URL and convert it to a base64 string in memory. Since we are
saving the image data, a Tweet will always load the image because it no longer depends on the third-party image database.

Note, you can use any library for HTTP requests like [axios](https://axios-http.com/). This example uses the `http` and `node-fetch` libraries available on
[npm](https://www.npmjs.com/).
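
For comparison, the same download-and-encode step with axios might look like this (a sketch of my own, not the project's code — the `http`-based version the project actually uses follows below):

```js
const axios = require('axios');

// fetch the image bytes and encode them as a base64 string in memory
const urlToBase64WithAxios = async (imgUrl) => {
  const res = await axios.get(imgUrl, { responseType: 'arraybuffer' });
  return Buffer.from(res.data).toString('base64');
};
```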

```js
import fetch from 'node-fetch';
import http from 'http';
...
const urlToBase64 = async (imgUrl, tweetFunction) => {
  await http.get(imgUrl, async (httpRes) => {
    httpRes.setEncoding('base64');
    // Exclude -> "data:" + httpRes.headers["content-type"] + ";base64,";
    let body = "";
    await httpRes.on('data', (data) => {
      body += data;
    });
    await httpRes.on('end', () => {
      tweetFunction(body);
    });
  }).on('error', (e) => {
    console.log(`Got error: ${e.message}`);
  });
}

const fetchRandomImage = async (tweetFunction) => {
  ...
  await urlToBase64(newImage, tweetFunction);
}
```

With a base64 string, we need to upload it as media content to Twitter. After uploading it, we have access to a `media_id`. This media_id can be attached to the actual Tweet, which
will cause the image to render on the feed. For this project, there is no text content attached to this Tweet.

```js
...
const tweetImage = (tweetContent) => {
  if (tweetContent) {
    twitterClient.post('media/upload', { media_data: tweetContent }, function (err, data, response) {
      // now we can assign alt text to the media, for use by screen readers and
      // other text-based presentations and interpreters
      var mediaIdStr = data.media_id_string
      var altText = "Shiba Inu"
      var meta_params = { media_id: mediaIdStr, alt_text: { text: altText } }

      twitterClient.post('media/metadata/create', meta_params, function (err, data, response) {
        if (!err) {
          // now we can reference the media and post a tweet (media will attach to the tweet)
          var params = { status: '', media_ids: [mediaIdStr] }

          twitterClient.post('statuses/update', params, function (err, data, response) {
            console.log(data)
          })
        }
      })
    })
  }
}

fetchRandomImage(tweetImage);
```

With each step tied together, we can retrieve Shiba images and generate Tweets with media content. To see the source code, head over to the
[GitHub repository](https://github.com/spencerlepine/manyshiba-bot).

With a working Twitter bot, I could run the script with Node on my local machine. However, it wouldn’t be automated if I had to run it manually. To solve this, I decided to deploy
everything onto Heroku. This service can run Node servers, not just serve simple static files.

With [Heroku Scheduler](https://devcenter.heroku.com/articles/scheduler), you can configure script executions. Make sure to add this script to your `package.json` file:

```js
{
  ...
  "scripts": {
    "start": "node app.js"
  }
}
```

Adding a setting to execute the script on a timer makes the bot automated. I decided to let the bot create a daily Tweet with this tool. Our ManyShiba bot is now fully functional!

---

View the source code on [GitHub](https://github.com/spencerlepine/manyshiba-bot). 

Follow my journey or connect with me here:

- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
- Portfolio: [spencerlepine.com](https://spencerlepine.com)
- https://github.com/spencerlepine
- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/blog/manyshiba-the-worlds-greatest-twitter-bot/manyshiba-the-worlds-greatest-twitter-bot.md b/content/manyshiba-the-worlds-greatest-twitter-bot/index.medium
similarity index 94%
rename from blog/manyshiba-the-worlds-greatest-twitter-bot/manyshiba-the-worlds-greatest-twitter-bot.md
rename to content/manyshiba-the-worlds-greatest-twitter-bot/index.medium
index 96d8895..9a468a8 100644
--- a/blog/manyshiba-the-worlds-greatest-twitter-bot/manyshiba-the-worlds-greatest-twitter-bot.md
+++ b/content/manyshiba-the-worlds-greatest-twitter-bot/index.medium
@@ -1,9 +1,7 @@
---
title: ManyShiba - The World's Greatest Twitter Bot
-slug: manyshiba-the-worlds-greatest-twitter-bot
-tags: [GitHub, Twitter, Node.js. Bot]
-authors: [spencerlepine]
-date: 2021-09-01T12:00
+description: Exploration of building a custom Twitter Bot.
+publish_status: "draft"
---

![Blog Post Thumbnail](./thumbnail.jpg)
@@ -153,8 +151,9 @@ Adding a setting to execute the script on a timer makes the bot automated. I dec

View the source code on [GitHub](https://github.com/spencerlepine/manyshiba-bot).

-Also find me here:
-* [Twitter (@spencerlepine)](https://twitter.com/SpencerLepine)
-* [GitHub (@spencerlepine)](https://github.com/spencerlepine)
-* [LinkedIn](https://www.linkedin.com/in/spencer-lepine/)
-* [YouTube (Spencer Lepine)](https://www.youtube.com/channel/UCBL6vAHJZqUlyJp-rcFU55Q)
+Follow my journey or connect with me here:
+- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
+- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
+- Portfolio: [spencerlepine.com](https://spencerlepine.com)
+- https://github.com/spencerlepine
+- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/blog/manyshiba-the-worlds-greatest-twitter-bot/thumbnail.jpg b/content/manyshiba-the-worlds-greatest-twitter-bot/thumbnail.jpg
similarity index 100%
rename from blog/manyshiba-the-worlds-greatest-twitter-bot/thumbnail.jpg
rename to content/manyshiba-the-worlds-greatest-twitter-bot/thumbnail.jpg
diff --git a/content/portfolio-site-continuous-integration-github-action/index.devto b/content/portfolio-site-continuous-integration-github-action/index.devto
new file mode 100644
index 0000000..4825b0b
--- /dev/null
+++ b/content/portfolio-site-continuous-integration-github-action/index.devto
@@ -0,0 +1,103 @@
+---
+title: Portfolio Site Continuous Integration GitHub Action
+description: Automating steps to deploy static files for my personal website. # Dev.to
+tags: 'github, automation, repository' # Dev.to (Max 3)
+canonical_url: null # Dev.to
+---
+
+![Blog Post Thumbnail](./thumbnail.jpg)
+
+After learning about GatsbyJS and building a static portfolio site and blog, I searched for systems to deploy this website. At this point, I had purchased the domain name through AWS Route53, but I still needed somewhere to host the static files.
+
+I chose to deploy the site on a DigitalOcean Droplet. This was a remote Ubuntu virtual machine with an IP address I could route to. Once I installed the Apache web server software and connected the domain name, the website was live.
+
+There was still one problem with this deployment process. 
Updating the website took several steps. After local development, I needed to build/generate the static files with Gatsby locally, and push them to the GitHub repo. Then, I would ssh into the Ubuntu Droplet, clone the updated repo again, and replace the static files for Apache to serve.

Steps to deploy were repetitive. Having to remember terminal commands and finding passwords was inconvenient. I was also unable to build the static files on the remote Ubuntu machine with its limited hardware specs.

One improvement I made was writing a script to delete and copy new files when deploying on DigitalOcean. This addition did not solve everything, as I hard-coded my GitHub username and repository name.

```sh
#!/bin/bash
# RUN the script:
# sudo ./syncBuild.sh portfolio-site

# color codes for echo -e output (defined here so the snippet runs standalone)
GREEN='\033[0;32m'
NC='\033[0m'

GITHUB_LINK="https://github.com/spencerlepine/$1.git"

git clone "$GITHUB_LINK"
echo -e "$GREEN successfully cloned repo$NC \n"

echo "Removing the current public folder"
rm -r -d public
echo "Moving into the github repo folder"
cd $1
echo "Moving public folder contents OUT of repo folder"
mv public ..
echo -e "$GREEN Successfully copied new files $NC \n"
echo "Moving back into parent directory"
cd ..

echo "Deleting leftover github repo files"
rm -d -r "$1"

echo "Restarting apache server"
systemctl restart apache2

echo -e "$GREEN Public folder sync complete! $NC"

echo "Visit: spencerlepine.com"
```

Although this process took less than 10 minutes, switching between the IDE, GitHub, the terminal, and the browser was annoying. It would be better to automate this process. To do that, we can use a GitHub Action that will trigger on every repository update. A handy feature of GitHub repositories is the ability to store secrets/environment variables. We can use this to store passwords directly connected to the repository, so all credentials needed for the project are stored in one place.

Let’s create the GitHub workflow file:
```
name: CI

on:
  push:
    branches: [ master ]

jobs:

  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Deploy Static Files
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.KEY }}
          passphrase: ${{ secrets.PASSPHRASE }}
          port: ${{ secrets.PORT }}
          script_stop: true
          script: |
            # change directory to where website files are stored
            # clone the repository
            # remove the current public folder w/ static files
            # enter the repo folder
            # extract the public folder from the repo folder
            # remove the leftover GitHub repo files
            # restart the web server
            echo "Visit deployed site: spencerlepine.com"
```

### Conclusion

This GitHub Action uses the [ssh-action](https://github.com/appleboy/ssh-action) action to handle the remote SSH connection. After storing the connection credentials in the GitHub repository secrets, it can securely and dynamically connect to the remote Ubuntu machine. The last key for this action job is the script: the verbatim Ubuntu commands that will be run. For the sake of brevity, I have only written pseudo-code for the deployment steps.

With this file saved in the `.github/workflows` folder in the project, GitHub can execute the action and connect to our remote server autonomously. After adding a blog post or updating the website, the only step required is pushing the code to GitHub (which I would do anyway). 
Now, the GitHub workflow will handle all of the steps to connect to the host server, delete the old static files, and download the fresh static files.

The continuous integration for this website is completely automated now. This saves me time and effort. No need to worry about forgetting how to deploy later on.

---

View the source code on [GitHub](https://github.com/spencerlepine/spencerlepine.com).

Follow my journey or connect with me here:
- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
- Portfolio: [spencerlepine.com](https://spencerlepine.com)
- https://github.com/spencerlepine
- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/content/portfolio-site-continuous-integration-github-action/index.md b/content/portfolio-site-continuous-integration-github-action/index.md
new file mode 100644
index 0000000..74280c5
--- /dev/null
+++ b/content/portfolio-site-continuous-integration-github-action/index.md
@@ -0,0 +1,119 @@
+---
+title: Portfolio Site Continuous Integration GitHub Action
+slug: portfolio-site-continuous-integration-github-action
+tags: [GitHub, Actions, Repositories, ssh, Ubuntu, DigitalOcean]
+authors: [spencerlepine]
+date: 2021-10-11T12:00
+---
+
+![Blog Post Thumbnail](./thumbnail.jpg)
+
+After learning about GatsbyJS and building a static portfolio site and blog, I searched for systems to deploy this website. At this point, I had purchased the domain name through
+AWS Route53, but I still needed somewhere to host the static files.
+
+I chose to deploy the site on a DigitalOcean Droplet. This was a remote Ubuntu virtual machine with an IP address I could route to. Once I installed the Apache web server software
+and connected the domain name, the website was live.
+
+There was still one problem with this deployment process. Updating the website took several steps. After local development, I needed to build/generate the static files with Gatsby
+locally, and push them to the GitHub repo. Then, I would ssh into the Ubuntu Droplet, clone the updated repo again, and replace the static files for Apache to serve.
+
+Steps to deploy were repetitive. Having to remember terminal commands and finding passwords was inconvenient. I was also unable to build the static files on the remote Ubuntu machine
+with its limited hardware specs.
+
+One improvement I made was writing a script to delete and copy new files when deploying on DigitalOcean. This addition did not solve everything, as I hard-coded my GitHub username
+and repository name.
+
+```sh
+#!/bin/bash
+# RUN the script:
+# sudo ./syncBuild.sh portfolio-site
+
+# color codes for echo -e output (defined here so the snippet runs standalone)
+GREEN='\033[0;32m'
+NC='\033[0m'
+
+GITHUB_LINK="https://github.com/spencerlepine/$1.git"
+
+git clone "$GITHUB_LINK"
+echo -e "$GREEN successfully cloned repo$NC \n"
+
+echo "Removing the current public folder"
+rm -r -d public
+echo "Moving into the github repo folder"
+cd $1
+echo "Moving public folder contents OUT of repo folder"
+mv public ..
+echo -e "$GREEN Successfully copied new files $NC \n"
+echo "Moving back into parent directory"
+cd ..

+echo "Deleting leftover github repo files"
+rm -d -r "$1"
+
+echo "Restarting apache server"
+systemctl restart apache2
+
+echo -e "$GREEN Public folder sync complete! $NC"
+
+echo "Visit: spencerlepine.com"
+```
+
+Although this process took less than 10 minutes, switching between the IDE, GitHub, the terminal, and the browser was annoying. It would be better to automate this process. 
To do
+that, we can use a GitHub Action that will trigger on every repository update. A handy feature of GitHub repositories is the ability to store secrets/environment variables. We can
+use this to store passwords directly connected to the repository, so all credentials needed for the project are stored in one place.
+
+Let’s create the GitHub workflow file:
+
+```
+name: CI
+
+on:
+  push:
+    branches: [ master ]
+
+jobs:
+
+  deploy:
+    name: Deploy
+    runs-on: ubuntu-latest
+    steps:
+      - name: Deploy Static Files
+        uses: appleboy/ssh-action@master
+        with:
+          host: ${{ secrets.HOST }}
+          username: ${{ secrets.USERNAME }}
+          key: ${{ secrets.KEY }}
+          passphrase: ${{ secrets.PASSPHRASE }}
+          port: ${{ secrets.PORT }}
+          script_stop: true
+          script: |
+            # change directory to where website files are stored
+            # clone the repository
+            # remove the current public folder w/ static files
+            # enter the repo folder
+            # extract the public folder from the repo folder
+            # remove the leftover GitHub repo files
+            # restart the web server
+            echo "Visit deployed site: spencerlepine.com"
+```
+
+### Conclusion
+
+This GitHub Action uses the [ssh-action](https://github.com/appleboy/ssh-action) action to handle the remote SSH connection. After storing the connection credentials in the
+GitHub repository secrets, it can securely and dynamically connect to the remote Ubuntu machine. The last key for this action job is the script: the verbatim Ubuntu commands that will
+be run. For the sake of brevity, I have only written pseudo-code for the deployment steps.
+
+With this file saved in the `.github/workflows` folder in the project, GitHub can execute the action and connect to our remote server autonomously. After adding a blog post or updating the
+website, the only step required is pushing the code to GitHub (which I would do anyway). Now, the GitHub workflow will handle all of the steps to connect to the host server,
+delete the old static files, and download the fresh static files.
+
+The continuous integration for this website is completely automated now. This saves me time and effort. No need to worry about forgetting how to deploy later on.
+
+---
+
+View the source code on [GitHub](https://github.com/spencerlepine/spencerlepine.com). 
+ +Follow my journey or connect with me here: + +- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/) +- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com) +- Portfolio: [spencerlepine.com](https://spencerlepine.com) +- https://github.com/spencerlepine +- Twitter: [@spencerlepine](https://twitter.com/spencerlepine) diff --git a/blog/portfolio-site-continuous-integration-github-action/portfolio-site-continuous-integration-github-action.md b/content/portfolio-site-continuous-integration-github-action/index.medium similarity index 90% rename from blog/portfolio-site-continuous-integration-github-action/portfolio-site-continuous-integration-github-action.md rename to content/portfolio-site-continuous-integration-github-action/index.medium index 5678fc4..682a237 100644 --- a/blog/portfolio-site-continuous-integration-github-action/portfolio-site-continuous-integration-github-action.md +++ b/content/portfolio-site-continuous-integration-github-action/index.medium @@ -1,9 +1,7 @@ --- title: Portfolio Site Continuous Integration GitHub Action -slug: portfolio-site-continuous-integration-github-action -tags: [GitHub, Actions, Repositories, ssh, Ubuntu, DigitalOcean] -authors: [spencerlepine] -date: 2021-10-11T12:00 +description: Automating steps to deploy static files for my personal website. +publish_status: "draft" --- ![Blog Post Thumbnail](./thumbnail.jpg) @@ -96,8 +94,9 @@ The continuous integration for this website is completely automated now. This sa View the source code on [GitHub](https://github.com/spencerlepine/spencerlepine.com). -Also find me here: -* [Twitter (@spencerlepine)](https://twitter.com/SpencerLepine) -* [GitHub (@spencerlepine)](https://github.com/spencerlepine) -* [LinkedIn](https://www.linkedin.com/in/spencer-lepine/) -* [YouTube (Spencer Lepine)](https://www.youtube.com/channel/UCBL6vAHJZqUlyJp-rcFU55Q) +Follow my journey or connect with me here: +- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/) +- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com) +- Portfolio: [spencerlepine.com](https://spencerlepine.com) +- https://github.com/spencerlepine +- Twitter: [@spencerlepine](https://twitter.com/spencerlepine) diff --git a/blog/portfolio-site-continuous-integration-github-action/thumbnail.jpg b/content/portfolio-site-continuous-integration-github-action/thumbnail.jpg similarity index 100% rename from blog/portfolio-site-continuous-integration-github-action/thumbnail.jpg rename to content/portfolio-site-continuous-integration-github-action/thumbnail.jpg diff --git a/blog/preparing-for-my-amazon-front-end-engineer-interview/preparing-for-my-amazon-front-end-engineer-interview.md b/content/preparing-for-my-amazon-front-end-engineer-interview/index.devto similarity index 90% rename from blog/preparing-for-my-amazon-front-end-engineer-interview/preparing-for-my-amazon-front-end-engineer-interview.md rename to content/preparing-for-my-amazon-front-end-engineer-interview/index.devto index 1b3def2..2a8b1ee 100644 --- a/blog/preparing-for-my-amazon-front-end-engineer-interview/preparing-for-my-amazon-front-end-engineer-interview.md +++ b/content/preparing-for-my-amazon-front-end-engineer-interview/index.devto @@ -1,9 +1,8 @@ --- title: Preparing for My Amazon Front End Engineer Interview -slug: preparing-for-my-amazon-front-end-engineer-interview -tags: [Job Search, Amazon, AWS, LeetCode, Interviews] -authors: [spencerlepine] -date: 2022-11-20T12:00 +description: My experience and advice for the 
Amazon Front End Engineer interview. +tags: 'job search, interview, amazon' # Dev.to (Max 3) +canonical_url: null # Dev.to --- ![Blog Post Thumbnail](./thumbnail.jpg) @@ -12,7 +11,7 @@ After my recent interviews with Amazon for the Front End Engineer role, I though TL;DR - practice LeetCode, practice vanilla JS, always verbalize your thought process, prepare stories to share in the STAR framework, and be confident (or fake it). -If you are interested in general advice for landing your first role in tech, I wrote another article about that: [How I Became a Software Engineer at 20 With No CS Degree](https://sppencerlepine.com/blog-TODO) +If you are interested in general advice for landing your first role in tech, I wrote another article about that: [How I Became a Software Engineer at 20 With No CS Degree](https://spencerlepine.github.io/blog/todo) For the Amazon FEE interviews, you can find a lot of material online, since thousands of people are applying to these roles. In this article I will share the resources that helped me and give advice on what to study. @@ -68,10 +67,12 @@ I will not go into detail about the specific problems I was given. You should fi All of this is biased to my experience, and you should look beyond just this article. Every new day you study, every hour you invest more time, your skills will sharpen. Focus on what you can control. I hope you got some value from the article. If you are interested in reading more about my job searching and landing my first role as a software engineer, checkout out these articles: - - [How I Became a Software Engineer at 20 With No CS Degree](https://sppencerlepine.com/blog-TODO) - - [My Coding Bootcamp Experience at Hack Reactor](https://sppencerlepine.com/blog-TODO) - -Follow my journey and connected with me here: -- LinkedIn: [/in/spencer-lepine](https://www.linkedin.com/in/spencer-lepine/) -- Twitter: [@SpencerLepine](https://twitter.com/spencerlepine) -- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com) \ No newline at end of file + - [How I Became a Software Engineer at 20 With No CS Degree](https://spencerlepine.github.io/blog/todo) + - [My Coding Bootcamp Experience at Hack Reactor](https://spencerlepine.github.io/blog/todo) + +Follow my journey or connect with me here: +- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/) +- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com) +- Portfolio: [spencerlepine.com](https://spencerlepine.com) +- https://github.com/spencerlepine +- Twitter: [@spencerlepine](https://twitter.com/spencerlepine) diff --git a/content/preparing-for-my-amazon-front-end-engineer-interview/index.md b/content/preparing-for-my-amazon-front-end-engineer-interview/index.md new file mode 100644 index 0000000..c07c4bc --- /dev/null +++ b/content/preparing-for-my-amazon-front-end-engineer-interview/index.md @@ -0,0 +1,102 @@ +--- +title: Preparing for My Amazon Front End Engineer Interview +slug: preparing-for-my-amazon-front-end-engineer-interview +tags: [Job Search, Amazon, AWS, LeetCode, Interviews] +authors: [spencerlepine] +date: 2022-11-20T12:00 +--- + +![Blog Post Thumbnail](./thumbnail.jpg) + +After my recent interviews with Amazon for the Front End Engineer role, I thought I should share my experience and advice. I got numerous requests on LinkedIn about my interview +experience, so I decided to document everything to pass along. 

TL;DR - practice LeetCode, practice vanilla JS, always verbalize your thought process, prepare stories to share in the STAR framework, and be confident (or fake it).

If you are interested in general advice for landing your first role in tech, I wrote another article about that:
[How I Became a Software Engineer at 20 With No CS Degree](https://spencerlepine.github.io/blog/todo)

For the Amazon FEE interviews, you can find a lot of material online, since thousands of people are applying to these roles. In this article I will share the resources that helped
me and give advice on what to study.

### Study Material & Resources:

Remember, your competition will get their hands on any information that helps them perform better in an interview. You should too. Get exposure to all the material you can, and
read/explore these resources to learn as much as possible:

- Article: [Amazon Front-End Engineer Interview Prep](https://www.interviewkickstart.com/companies/amazon-front-end-engineer-interview-prep)
- Article: [Understanding Amazon’s Front-End Engineering Interview](https://xjamundx.medium.com/understanding-amazons-front-end-engineering-interview-5e9f38b58058)
- Article: [How I Landed a Front-End Engineering Job at Amazon](https://xjamundx.medium.com/how-i-got-a-front-end-engineering-job-at-amazon-807e26c33915)
- Study Topic: [Amazon front end interview questions](https://www.frontendinterviewhandbook.com/companies/amazon-front-end-interview-questions)
- LeetCode Post: [Amazon Virtual Onsite - FrontEnd Engineer II Offer](https://leetcode.com/discuss/interview-question/694045/amazon-virtual-onsite-frontend-engineer-ii-offer)
- Google search: [“reddit amazon front end engineer”](https://www.google.com/search?q=reddit+amazon+front+end+engineer)
- Youtube Video: [Amazon Front End Interview Prep | Software Engineer](https://www.youtube.com/watch?v=rMWDtxJQIbQ)
- Youtube Video: [Amazon Frontend Interview Experience 2022 | Frontend Engineer 2](https://www.youtube.com/watch?v=jI4WfkudBb8)
- Youtube Video: [Amazon Interview Experience | Software Engineer](https://www.youtube.com/watch?v=baT3OzbOg5s&ab_channel=KeepOnCoding)
- Youtube Video:
  [Rejected by Amazon - Frontend and Software Engineer - Full Loop Interview Exp with no degree at all](https://www.youtube.com/watch?v=gTIS4waIpG4&ab_channel=CodePhony)
- Youtube Video: [Amazon Behavioral Interview Questions | Leadership Principles Explained](https://www.youtube.com/watch?v=6p1m2nCE7jE&ab_channel=Exponent)

### Soft skills & Behavioral Questions:

First, let’s touch upon soft skills and behavioral preparation. You must learn about the STAR framework, as well as Amazon’s Leadership Principles (LPs). My advice would be to prepare one
story per LP, as you will need to share concise answers for behavioral questions. You can practice telling a story (conflict with a coworker, a time you applied advice from a peer…) in
the STAR framework. STAR gives you structure to describe what happened with context and outcomes.

Every interview could start with a few behavioral questions. If you try to wing it and think of stories in the moment, it will likely leave a bad impression.

An important theme is that technical skills can easily be taught; soft skills cannot. A valuable candidate has a great personality and collaborates well with the team. If you were
hiring someone, that is what you would look for. Many people have talent. What sets you apart is how you present and market yourself, being memorable and leaving an impression. 
It is essential to communicate well or have story-telling skills.

### Technical Skills & Coding Challenges:

Now let’s talk about technical skills and the coding challenges. You may be curious about exactly what I studied to prepare for this interview.

Essentials to study:

- LeetCode problems
- Vanilla JS practice (practice building simple widgets)
- Frontend system design basics

I would recommend practicing LeetCode-style problems and exercising a problem-solving process. Think about the input/output of the problem. Think briefly about edge cases/constraints. Write some pseudo-code first, then attempt the problem. A huge focus should be verbalizing your thought process the entire time. Avoid going silent for too long; the interviewer is trying to understand how you work through the challenge.

Recording myself solving LeetCode problems helped me a ton (like this [video](https://www.youtube.com/watch?v=rwEaDpdZuQg)). It can be uncomfortable and you might be rough at the start, but you will always improve.

In FEE interviews, you can likely solve coding problems in any language you prefer. Although you will need to know vanilla JS, you could solve a few problems in Python if preferred.

Specific to the Amazon interview, they will likely test your frontend skills by having you build a small widget with plain HTML/CSS/JS. You may need to brush up on the DOM methods and syntax. I recommend building several small widgets to practice (there is a minimal sketch of one at the end of this article). Make sure you understand selectors, event listeners, and modifying styles with JS. If you are unsure what kinds of widgets I am talking about, look through [this](https://www.frontendinterviewhandbook.com/companies/amazon-front-end-interview-questions) page.

Something to note: they don’t always use an IDE for these interviews. This allows you to focus on the concept and the design. You save time since you aren’t fixing little bugs to get an HTML page rendered.

Lastly, for the front end system design, make sure you understand the basics. Get familiar with state management, performance optimizations, and accessibility. This [playlist](https://www.youtube.com/playlist?list=PLI9W87-Dqn7j_x6QtR6sUjycJR7nQLBqT) has TONS of detailed videos.

I will not go into detail about the specific problems I was given. You should find plenty of these in the links I shared above. This article should provide enough momentum to get started, and you will know best what to study.

### Conclusion

All of this is biased by my own experience, and you should look beyond just this article. Every new day you study, every hour you invest more time, your skills will sharpen. Focus on what you can control.

I hope you got some value from the article.
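Before the closing links, here is a minimal vanilla JS sketch of the kind of widget exercise mentioned above, a simple click counter. It is purely illustrative (not an actual problem from my interview loop) and uses only standard DOM APIs:

```js
// Illustrative practice widget: a click counter built with plain DOM APIs.
// Paste into any browser console or script tag to try it.
const button = document.createElement('button');
const label = document.createElement('span');
let count = 0;

button.textContent = 'Click me';
label.textContent = ' Clicks: 0';

// Event listener + updating text and styles from JS
button.addEventListener('click', () => {
  count += 1;
  label.textContent = ` Clicks: ${count}`;
  label.style.fontWeight = count > 5 ? 'bold' : 'normal';
});

document.body.append(button, label);
```

Practicing a handful of these (tabs, accordions, star ratings, timers) builds the DOM fluency these rounds look for.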
If you are interested in reading more about my job searching and landing my first role as a software engineer, check out these articles:

- [How I Became a Software Engineer at 20 With No CS Degree](https://spencerlepine.github.io/blog/todo)
- [My Coding Bootcamp Experience at Hack Reactor](https://spencerlepine.github.io/blog/todo)

Follow my journey or connect with me here:

- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
- Portfolio: [spencerlepine.com](https://spencerlepine.com)
- https://github.com/spencerlepine
- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/content/preparing-for-my-amazon-front-end-engineer-interview/index.medium b/content/preparing-for-my-amazon-front-end-engineer-interview/index.medium
new file mode 100644
index 0000000..7088de2
--- /dev/null
+++ b/content/preparing-for-my-amazon-front-end-engineer-interview/index.medium
@@ -0,0 +1,77 @@
---
title: Preparing for My Amazon Front End Engineer Interview
description: My experience and advice for the Amazon Front End Engineer interview.
publish_status: "draft"
---

![Blog Post Thumbnail](./thumbnail.jpg)

After my recent interviews with Amazon for the Front End Engineer role, I thought I should share my experience and advice. I got numerous requests on LinkedIn about my interview experience, so I decided to document everything to pass along.

TL;DR - practice LeetCode, practice vanilla JS, always verbalize your thought process, prepare stories to share in the STAR framework, and be confident (or fake it).

If you are interested in general advice for landing your first role in tech, I wrote another article about that: [How I Became a Software Engineer at 20 With No CS Degree](https://spencerlepine.github.io/blog/todo)

For the Amazon FEE interviews, you can find a lot of material online, since thousands of people are applying to these roles. In this article I will share the resources that helped me and give advice on what to study.

Study Material & Resources:

Remember, your competition will get their hands on any information that helps them perform better in an interview. You should too.
Get exposure to all the material you can, read/explore these resources to learn as much as possible:

- Article: [Amazon Front-End Engineer Interview Prep](https://www.interviewkickstart.com/companies/amazon-front-end-engineer-interview-prep)
- Article: [Understanding Amazon’s Front-End Engineering Interview](https://xjamundx.medium.com/understanding-amazons-front-end-engineering-interview-5e9f38b58058)
- Article: [How I Landed a Front-End Engineering Job at Amazon](https://xjamundx.medium.com/how-i-got-a-front-end-engineering-job-at-amazon-807e26c33915)
- Study Topic: [Amazon front end interview questions](https://www.frontendinterviewhandbook.com/companies/amazon-front-end-interview-questions)
- LeetCode Post: [Amazon Virtual Onsite - FrontEnd Engineer II Offer](https://leetcode.com/discuss/interview-question/694045/amazon-virtual-onsite-frontend-engineer-ii-offer)
- Google search: [“reddit amazon front end engineer”](https://www.google.com/search?q=reddit+amazon+front+end+engineer)
- Youtube Video: [Amazon Front End Interview Prep | Software Engineer](https://www.youtube.com/watch?v=rMWDtxJQIbQ)
- Youtube Video: [Amazon Frontend Interview Experience 2022 | Frontend Engineer 2](https://www.youtube.com/watch?v=jI4WfkudBb8)
- Youtube Video: [Amazon Interview Experience | Software Engineer](https://www.youtube.com/watch?v=baT3OzbOg5s&ab_channel=KeepOnCoding)
- Youtube Video: [Rejected by Amazon - Frontend and Software Engineer - Full Loop Interview Exp with no degree at all](https://www.youtube.com/watch?v=gTIS4waIpG4&ab_channel=CodePhony)
- Youtube Video: [Amazon Behavioral Interview Questions | Leadership Principles Explained](https://www.youtube.com/watch?v=6p1m2nCE7jE&ab_channel=Exponent)

### Soft skills & Behavioral Questions:

First, let’s touch upon soft skills and behavioral preparation. You must learn about the STAR framework, as well as the leadership principles. My advice would be to prepare one story per LP, as you will need to share concise answers for behavioral questions. You can practice telling a story (conflict with coworker, time you applied advice from a peer…) in the STAR framework. STAR gives you structure to describe what happened with context and outcomes.

Every interview could start with a few behavioral questions. If you try to wing it and think of stories in the moment, it likely will leave a bad impression.

An important theme is that technical skills can easily be taught, soft skills cannot. A valuable candidate has great personality and collaboration skills with the team. If you were hiring someone, that is what you would look for. Many people have talent. What sets you apart is how you present/market yourself, being memorable/leaving an impression. It is essential to communicate well or have story-telling skills.

### Technical Skills & Coding Challenges:

Now let’s talk about technical skills and the coding challenges. You may be curious about exactly what I studied to prepare for this interview.

Essentials to study:

- LeetCode problems
- Vanilla JS practice (practice building simple widgets)
- Frontend system design basics

I would recommend practicing LeetCode-style problems and exercising a problem-solving process. Think about the input/output of the problem. Think briefly about edge cases/constraints. Write some pseudo-code first, then attempt the problem. A huge focus should be verbalizing your thought process the entire time.
Avoid going silent for too long; the interviewer is trying to understand how you work through the challenge.

Recording myself solving LeetCode problems helped me a ton (like this [video](https://www.youtube.com/watch?v=rwEaDpdZuQg)). It can be uncomfortable and you might be rough at the start, but you will always improve.

In FEE interviews, you can likely solve coding problems in any language you prefer. Although you will need to know vanilla JS, you could solve a few problems in Python if preferred.

Specific to the Amazon interview, they will likely test your frontend skills by having you build a small widget with plain HTML/CSS/JS. You may need to brush up on the DOM methods and syntax. I recommend building several small widgets to practice. Make sure you understand selectors, event listeners, and modifying styles with JS. If you are unsure what kinds of widgets I am talking about, look through [this](https://www.frontendinterviewhandbook.com/companies/amazon-front-end-interview-questions) page.

Something to note: they don’t always use an IDE for these interviews. This allows you to focus on the concept and the design. You save time since you aren’t fixing little bugs to get an HTML page rendered.

Lastly, for the front end system design, make sure you understand the basics. Get familiar with state management, performance optimizations, and accessibility. This [playlist](https://www.youtube.com/playlist?list=PLI9W87-Dqn7j_x6QtR6sUjycJR7nQLBqT) has TONS of detailed videos.

I will not go into detail about the specific problems I was given. You should find plenty of these in the links I shared above. This article should provide enough momentum to get started, and you will know best what to study.

### Conclusion

All of this is biased by my own experience, and you should look beyond just this article. Every new day you study, every hour you invest more time, your skills will sharpen. Focus on what you can control.

I hope you got some value from the article. If you are interested in reading more about my job searching and landing my first role as a software engineer, check out these articles:

- [How I Became a Software Engineer at 20 With No CS Degree](https://spencerlepine.github.io/blog/todo)
- [My Coding Bootcamp Experience at Hack Reactor](https://spencerlepine.github.io/blog/todo)

Follow my journey or connect with me here:
- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
- Portfolio: [spencerlepine.com](https://spencerlepine.com)
- https://github.com/spencerlepine
- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/content/quickly-open-github-repo-in-browser-from-terminal/index.devto b/content/quickly-open-github-repo-in-browser-from-terminal/index.devto
new file mode 100644
index 0000000..004430d
--- /dev/null
+++ b/content/quickly-open-github-repo-in-browser-from-terminal/index.devto
@@ -0,0 +1,107 @@
---
title: Quickly Open GitHub Repo in Browser From Terminal
description: Create custom Git commands to open the repository web page in the browser.
tags: 'git, workflow, terminal' # Dev.to (Max 3)
canonical_url: null # Dev.to
---

![Blog Post Thumbnail](./thumbnail.jpg)

I work a lot with the Git CLI and GitHub repositories cloned on my local machine. I need a fast way to open the repository web page in the browser. Here is how I solved this, specifically on macOS.

To start, the quickest way to open the remote URL is the following bash command:

```sh
# `git remote -v` lists the remotes; awk keeps only the push URL for origin;
# `open` (macOS) then launches that URL in the default browser
git remote -v | awk '/origin.*push/ {print $2}' | xargs open
```

That command alone is not very helpful, since it will be difficult to memorize and type out repeatedly.

Instead, we can create a user-friendly command to use in the macOS terminal. By creating a custom named script in the `bin` directory, the terminal will execute it when the command is used.

First, navigate to the bin directory:

```sh
$ cd ~/../../usr/local/bin

# Make sure this directory is on the PATH
# configured for your terminal (check with: echo $PATH)
```

Now create the script file; here I named the command `repo-open`:

```sh
$ vim repo-open
```

Now paste the script contents into the file editor:

```sh
#!/bin/bash

git remote -v | awk '/origin.*push/ {print $2}' | xargs open
```

Save the file:

- press ‘ESC’
- press ‘SHIFT’ + ‘:’
- type ‘wq’ + ENTER

Make the script executable:

```sh
$ chmod +x repo-open
```

That’s it! Now you can run the new script in the terminal. If we are in a directory with a `.git` folder, we can run `repo-open`, and it will open the remote URL in the default browser.

```sh
$ repo-open
# opens new page in the browser
```

Optionally, you can dig a little deeper into writing these scripts. Here are a few examples for Mac and Windows:

Bash script for Mac:

```sh
function gbrowse {
  gbrowsevar=$(git config --get remote.origin.url)
  printf "${gbrowsevar}"
  open "$gbrowsevar" # macOS opens URLs with `open` (`start` is the Windows equivalent)
}
```

Script for Windows:

```sh
# GIT: open remote repository using Google Chrome
function gbrowse {
  NC='\033[0m' # No Color
  SB='\033[1;34m' # Sky Blue
  PP='\033[1;35m' # Purple
  gbrowsevar=$(git config --get remote.origin.url)
  printf "\n${PP}→ ${SB}gbrowse:${NC} Chrome\n"
  printf "${gbrowsevar}\n"
  start chrome $gbrowsevar
}

# GIT: open remote repository using Firefox
function fbrowse {
  NC='\033[0m' # No Color
  SB='\033[1;34m' # Sky Blue
  PP='\033[1;35m' # Purple
  fbrowsevar=$(git config --get remote.origin.url)
  printf "\n${PP}→ ${SB}fbrowse:${NC} Firefox\n"
  printf "${fbrowsevar}\n"
  start firefox $fbrowsevar
}
```

---

That’s all for today, I hope this article was helpful. If you have any questions, feel free to connect with me.

Follow my journey or connect with me here:
- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
- Portfolio: [spencerlepine.com](https://spencerlepine.com)
- https://github.com/spencerlepine
- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/content/quickly-open-github-repo-in-browser-from-terminal/index.md b/content/quickly-open-github-repo-in-browser-from-terminal/index.md
new file mode 100644
index 0000000..83777af
--- /dev/null
+++ b/content/quickly-open-github-repo-in-browser-from-terminal/index.md
@@ -0,0 +1,116 @@
---
title: Quickly Open GitHub Repo in Browser From Terminal
slug: quickly-open-github-repo-in-browser-from-terminal
tags: [Git, Workflow, GitHub, Scripts, Terminal, Commands]
authors: [spencerlepine]
date: 2022-10-22T12:00
---

![Blog Post Thumbnail](./thumbnail.jpg)

I work a lot with the Git CLI and GitHub repositories cloned on my local machine. I need a fast way to open the repository web page in the browser. Here is how I solved this, specifically on macOS.

To start, the quickest way to open the remote URL is the following bash command:

```sh
git remote -v | awk '/origin.*push/ {print $2}' | xargs open
```

That command alone is not very helpful, since it will be difficult to memorize and type out repeatedly.

Instead, we can create a user-friendly command to use in the macOS terminal. By creating a custom named script in the `bin` directory, the terminal will execute it when the command is used.

First, navigate to the bin directory:

```sh
$ cd ~/../../usr/local/bin

# Make sure this directory is on the PATH
# configured for your terminal (check with: echo $PATH)
```

Now create the script file; here I named the command `repo-open`:

```sh
$ vim repo-open
```

Now paste the script contents into the file editor:

```sh
#!/bin/bash

git remote -v | awk '/origin.*push/ {print $2}' | xargs open
```

Save the file:

- press ‘ESC’
- press ‘SHIFT’ + ‘:’
- type ‘wq’ + ENTER

Make the script executable:

```sh
$ chmod +x repo-open
```

That’s it! Now you can run the new script in the terminal. If we are in a directory with a `.git` folder, we can run `repo-open`, and it will open the remote URL in the default browser.

```sh
$ repo-open
# opens new page in the browser
```

Optionally, you can dig a little deeper into writing these scripts. Here are a few examples for Mac and Windows:

Bash script for Mac:

```sh
function gbrowse {
  gbrowsevar=$(git config --get remote.origin.url)
  printf "${gbrowsevar}"
  open "$gbrowsevar" # macOS opens URLs with `open` (`start` is the Windows equivalent)
}
```

Script for Windows:

```sh
# GIT: open remote repository using Google Chrome
function gbrowse {
  NC='\033[0m' # No Color
  SB='\033[1;34m' # Sky Blue
  PP='\033[1;35m' # Purple
  gbrowsevar=$(git config --get remote.origin.url)
  printf "\n${PP}→ ${SB}gbrowse:${NC} Chrome\n"
  printf "${gbrowsevar}\n"
  start chrome $gbrowsevar
}

# GIT: open remote repository using Firefox
function fbrowse {
  NC='\033[0m' # No Color
  SB='\033[1;34m' # Sky Blue
  PP='\033[1;35m' # Purple
  fbrowsevar=$(git config --get remote.origin.url)
  printf "\n${PP}→ ${SB}fbrowse:${NC} Firefox\n"
  printf "${fbrowsevar}\n"
  start firefox $fbrowsevar
}
```

---

That’s all for today, I hope this article was helpful. If you have any questions, feel free to connect with me.

Follow my journey or connect with me here:

- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
- Portfolio: [spencerlepine.com](https://spencerlepine.com)
- https://github.com/spencerlepine
- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/blog/quickly-open-github-repo-in-browser-from-terminal/quickly-open-github-repo-in-browser-from-terminal.md b/content/quickly-open-github-repo-in-browser-from-terminal/index.medium
similarity index 83%
rename from blog/quickly-open-github-repo-in-browser-from-terminal/quickly-open-github-repo-in-browser-from-terminal.md
rename to content/quickly-open-github-repo-in-browser-from-terminal/index.medium
index a8ae2e3..1b67d5a 100644
--- a/blog/quickly-open-github-repo-in-browser-from-terminal/quickly-open-github-repo-in-browser-from-terminal.md
+++ b/content/quickly-open-github-repo-in-browser-from-terminal/index.medium
@@ -1,9 +1,7 @@
---
title: Quickly Open GitHub Repo in Browser From Terminal
-slug: quickly-open-github-repo-in-browser-from-terminal
-tags: [Git, Workflow, GitHub, Scripts, Terminal, Commands]
-authors: [spencerlepine]
-date: 2022-10-22T12:00
+description: Create custom Git commands to open the repository web page in the browser.
+publish_status: "draft"
---

![Blog Post Thumbnail](./thumbnail.jpg)
@@ -98,8 +96,11 @@ function fbrowse {

---

-That’s all for today, I hope this article was helpful. If you have any questions, feel free to connect with me. You can find my profiles here:
-* [Twitter (@spencerlepine)](https://twitter.com/SpencerLepine)
-* [GitHub (@spencerlepine)](https://github.com/spencerlepine)
-* [LinkedIn](https://www.linkedin.com/in/spencer-lepine/)
-* [YouTube (Spencer Lepine)](https://www.youtube.com/channel/UCBL6vAHJZqUlyJp-rcFU55Q)
\ No newline at end of file
+That’s all for today, I hope this article was helpful. If you have any questions, feel free to connect with me.
+
+Follow my journey or connect with me here:
+- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
+- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
+- Portfolio: [spencerlepine.com](https://spencerlepine.com)
+- https://github.com/spencerlepine
+- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/blog/quickly-open-github-repo-in-browser-from-terminal/thumbnail.jpg b/content/quickly-open-github-repo-in-browser-from-terminal/thumbnail.jpg
similarity index 100%
rename from blog/quickly-open-github-repo-in-browser-from-terminal/thumbnail.jpg
rename to content/quickly-open-github-repo-in-browser-from-terminal/thumbnail.jpg
diff --git a/content/software-engineering-workflow/index.devto b/content/software-engineering-workflow/index.devto
new file mode 100644
index 0000000..8fd2733
--- /dev/null
+++ b/content/software-engineering-workflow/index.devto
@@ -0,0 +1,115 @@
---
title: Software Engineering Workflow
description: Collection of software and resources I use for work. # Dev.to
tags: 'workflow, tools, software engineer' # Dev.to (Max 3)
canonical_url: null # Dev.to
---

![Blog Post Thumbnail](./thumbnail.jpg)

This is a collection of resources and my general workflow for Software Engineering. Note: my workstation runs macOS.

### **Dependencies/Libraries:**

* [Homebrew](https://brew.sh/) - package manager for macOS (and Linux).
* [Git](https://git-scm.com/downloads) - version control, manage files during project development
* [Node.js](https://nodejs.org/en/download/) + [Nvm](https://github.com/nvm-sh/nvm) - JavaScript runtime outside the browser, with Nvm to manage Node versions
* [Npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) - package manager and huge registry of libraries/packages available to use in projects.
* [Python 3](https://www.python.org/downloads/) - language interpreter for Python ^3.0.0.
* [MySQL](https://www.mysql.com/products/workbench/) - SQL database software for development
* [Redis](https://redis.io/) - real time data storage with different data structures in a cache
* [Heroku CLI](https://devcenter.heroku.com/articles/heroku-cli) - manager for Heroku apps from the command line
* [Amazon CLI](https://aws.amazon.com/cli/) - manager for AWS services from the command line


### **Communication:**

* [Slack](https://slack.com/)
* [Zoom](https://zoom.us/)


### **Recording:**

* [OBS](https://obsproject.com/)
* [Zoom Meeting Recording](https://zoom.us/)


### **Other Software:**

* [Chrome](https://www.google.com/chrome/) - main browser with debugging tools
* [Postman](https://www.postman.com/) - API platform for easy endpoint testing
* [Flux](https://justgetflux.com/) - screen eye strain assistance
* [GIMP](https://www.gimp.org/) - photo editing software


### **Toy problems:**

* [ExcaliDraw](https://excalidraw.com/)
* [LeetCode](https://leetcode.com/)
* [CyberDojo](https://cyber-dojo.org/creator/home)
* [TopCoder](https://www.topcoder.com/)
* [Coderbyte](https://coderbyte.com/)
* [HackerRank](https://www.hackerrank.com/)


### **Note taking:**

* [Notion](https://www.notion.so/)


### **IDE:**

[VSCode](https://code.visualstudio.com/download):
* MacOS Quick Action: Open Folder from finder -> [Configure Quick Action](https://stackoverflow.com/questions/64040393/open-a-folder-in-vscode-through-finder-in-macos)
* [ESLint](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) Extension - Integrates ESLint JavaScript into VS Code.
* [Bracket Pair Colorizer](https://marketplace.visualstudio.com/items?itemName=CoenraadS.bracket-pair-colorizer) Extension - A customizable extension for colorizing matching brackets
* [Open In Default Browser](https://marketplace.visualstudio.com/items?itemName=peakchen90.open-html-in-browser) Extension - A VSCode extension to quickly open HTML files in the browser
* [Stylelint](https://marketplace.visualstudio.com/items?itemName=stylelint.vscode-stylelint) Extension - Modern CSS/SCSS/Less linter
* settings.json:

  ```json
  {
    "editor.lightbulb.enabled": false,
    "editor.parameterHints.enabled": false,
    "editor.renderWhitespace": "all",
    "editor.snippetSuggestions": "none",
    "editor.tabSize": 2,
    "editor.wordWrap": "off",
    "emmet.showExpandedAbbreviation": "never",
    "files.trimTrailingWhitespace": true,
    "javascript.suggest.enabled": false,
    "javascript.updateImportsOnFileMove.enabled": "never",
    "javascript.validate.enable": false,
    "eslint.alwaysShowStatus": true,
    "explorer.confirmDelete": false,
    "python.pythonPath": "/usr/bin/python3",
    "workbench.editorAssociations": {
      "*.ipynb": "jupyter.notebook.ipynb"
    },
    "[javascript]": {
      "editor.defaultFormatter": "vscode.typescript-language-features"
    },
    "css.validate": false,
    "window.zoomLevel": 2,
    "editor.hover.sticky": false,
    "editor.formatOnPaste": true,
    "editor.formatOnSave": true,
    "editor.defaultFormatter": "vscode.json-language-features",
    "workbench.iconTheme": "material-icon-theme",
    "security.workspace.trust.untrustedFiles": "open",
    "liveshare.allowGuestTaskControl": true,
    "liveshare.allowGuestDebugControl": true,
    "liveshare.anonymousGuestApproval": "accept",
    "python.defaultInterpreterPath": "/usr/bin/python3",
    "editor.largeFileOptimizations": false
  }
  ```

## **Interested in working together?**

Follow my journey or connect with me here:
- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
- Portfolio: [spencerlepine.com](https://spencerlepine.com)
- https://github.com/spencerlepine
- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/content/software-engineering-workflow/index.md b/content/software-engineering-workflow/index.md
new file mode 100644
index 0000000..281a0e2
--- /dev/null
+++ b/content/software-engineering-workflow/index.md
@@ -0,0 +1,113 @@
---
title: Software Engineering Workflow
slug: software-engineering-workflow
tags: [Workflow, Tools, Software Engineer]
authors: [spencerlepine]
date: 2021-08-13T12:00
---

![Blog Post Thumbnail](./thumbnail.jpg)

This is a collection of resources and my general workflow for Software Engineering. Note: my workstation runs macOS.

### **Dependencies/Libraries:**

- [Homebrew](https://brew.sh/) - package manager for macOS (and Linux).
- [Git](https://git-scm.com/downloads) - version control, manage files during project development
- [Node.js](https://nodejs.org/en/download/) + [Nvm](https://github.com/nvm-sh/nvm) - JavaScript runtime outside the browser, with Nvm to manage Node versions
- [Npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) - package manager and huge registry of libraries/packages available to use in projects.
- [Python 3](https://www.python.org/downloads/) - language interpreter for Python ^3.0.0.
+- [MySQL](https://www.mysql.com/products/workbench/) - SQL database software for development +- [Redis](https://redis.io/) - real time data storage with different data structures in a cache +- [Heroku CLI](https://devcenter.heroku.com/articles/heroku-cli) - manager for Heroku apps from the command line +- [Amazon CLI](https://aws.amazon.com/cli/) - manager for AWS services from the command line + +### **Communication:** + +- [Slack](https://slack.com/) +- [Zoom](https://zoom.us/) + +### **Recording:** + +- [OBS](https://obsproject.com/) +- [Zoom Meeting Recording](https://zoom.us/) + +### **Other Software:** + +- [Chrome](https://www.google.com/chrome/) - main browser with debugging tools +- [Postman](https://www.postman.com/) - API platform for easy endpoint testing +- [Flux](https://justgetflux.com/) - screen eye strain assistance +- [GIMP](https://www.gimp.org/) - photo editing software + +### **Toy problems:** + +- [ExcaliDraw](https://excalidraw.com/) +- [LeetCode](https://leetcode.com/) +- [CyberDojo](https://cyber-dojo.org/creator/home) +- [TopCoder](https://www.topcoder.com/) +- [Coderbyte](https://coderbyte.com/) +- [HackerRank](https://www.hackerrank.com/) + +### **Note taking:** + +- [Notion](https://www.notion.so/) + +### **IDE:** + +[VSCode](https://code.visualstudio.com/download): + +- MacOS Quick Action: Open Folder from finder -> [Configure Quick Action](https://stackoverflow.com/questions/64040393/open-a-folder-in-vscode-through-finder-in-macos) +- [ESLint](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) Extension - Integrates ESLint JavaScript into VS Code. +- [Bracket Pair Colorizer](https://marketplace.visualstudio.com/items?itemName=CoenraadS.bracket-pair-colorizer) Extension - A customizable extension for colorizing matching + brackets +- [Open In Default Browser](https://marketplace.visualstudio.com/items?itemName=peakchen90.open-html-in-browser) Extension - A VSCode extension to fast open html file in browser +- [Stylelint](https://marketplace.visualstudio.com/items?itemName=stylelint.vscode-stylelint) Extension - Modern CSS/SCSS/Less linter +- settings.json: + + ```json + { + "editor.lightbulb.enabled": false, + "editor.parameterHints.enabled": false, + "editor.renderWhitespace": "all", + "editor.snippetSuggestions": "none", + "editor.tabSize": 2, + "editor.wordWrap": "off", + "emmet.showExpandedAbbreviation": "never", + "files.trimTrailingWhitespace": true, + "javascript.suggest.enabled": false, + "javascript.updateImportsOnFileMove.enabled": "never", + "javascript.validate.enable": false, + "eslint.alwaysShowStatus": true, + "explorer.confirmDelete": false, + "python.pythonPath": "/usr/bin/python3", + "workbench.editorAssociations": { + "*.ipynb": "jupyter.notebook.ipynb" + }, + "[javascript]": { + "editor.defaultFormatter": "vscode.typescript-language-features" + }, + "css.validate": false, + "window.zoomLevel": 2, + "editor.hover.sticky": false, + "editor.formatOnPaste": true, + "editor.formatOnSave": true, + "editor.defaultFormatter": "vscode.json-language-features", + "workbench.iconTheme": "material-icon-theme", + "security.workspace.trust.untrustedFiles": "open", + "liveshare.allowGuestTaskControl": true, + "liveshare.allowGuestDebugControl": true, + "liveshare.anonymousGuestApproval": "accept", + "python.defaultInterpreterPath": "/usr/bin/python3", + "editor.largeFileOptimizations": false + } + ``` + +## **Interested in working together?** + +Follow my journey or connect with me here: + +- LinkedIn: 
[/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
- Portfolio: [spencerlepine.com](https://spencerlepine.com)
- https://github.com/spencerlepine
- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/blog/software-engineering-workflow/software-engineering-workflow.md b/content/software-engineering-workflow/index.medium
similarity index 91%
rename from blog/software-engineering-workflow/software-engineering-workflow.md
rename to content/software-engineering-workflow/index.medium
index b11803a..f98db9c 100644
--- a/blog/software-engineering-workflow/software-engineering-workflow.md
+++ b/content/software-engineering-workflow/index.medium
@@ -1,9 +1,7 @@
---
title: Software Engineering Workflow
-slug: software-engineering-workflow
-tags: [Workflow, Tools, Software Engineer]
-authors: [spencerlepine]
-date: 2021-08-13T12:00
+description: Collection of software and resources I use for work.
+publish_status: "draft"
---

![Blog Post Thumbnail](./thumbnail.jpg)
@@ -108,8 +106,9 @@ This is a collection of resources and my general workflow for Software Engineeri

## **Interested in working together?**

-Let’s connect! Find me on any of my socials linked below:
-
-* [Twitter (@spencerlepine)](https://twitter.com/SpencerLepine)
-* [GitHub (@spencerlepine)](https://github.com/spencerlepine)
-* [YouTube (Spencer Lepine)](https://www.youtube.com/channel/UCBL6vAHJZqUlyJp-rcFU55Q)
+Follow my journey or connect with me here:
+- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
+- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
+- Portfolio: [spencerlepine.com](https://spencerlepine.com)
+- https://github.com/spencerlepine
+- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/blog/software-engineering-workflow/thumbnail.jpg b/content/software-engineering-workflow/thumbnail.jpg
similarity index 100%
rename from blog/software-engineering-workflow/thumbnail.jpg
rename to content/software-engineering-workflow/thumbnail.jpg
diff --git a/content/typescript-development-set-up-for-vscode/index.devto b/content/typescript-development-set-up-for-vscode/index.devto
new file mode 100644
index 0000000..085d44c
--- /dev/null
+++ b/content/typescript-development-set-up-for-vscode/index.devto
@@ -0,0 +1,129 @@
---
title: TypeScript Development Set Up for VSCode
description: Brief walkthrough for TypeScript development in VSCode. # Dev.to
tags: 'typescript, vscode, workflow' # Dev.to (Max 3)
canonical_url: null # Dev.to
---

![](./thumbnail.jpg)

Looking to set up VSCode for a TypeScript project? This article will walk through the initial configuration steps for just that.

This walk-through assumes you already have the following installed:

- [VSCode](https://code.visualstudio.com/)
- [Node.js](https://nodejs.org/en/download/)
- [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm)

To begin, let’s install the `typescript` module globally:

```sh
$ npm install -g typescript
$ tsc --version
```

Great! Now we can code our `.ts` files. A small issue remains: in order to compile those files to `.js`, we must run `tsc` or `npm run build` manually. To avoid this, we can auto-compile __on save__ in our IDE.

## Configure Auto-Compile on Save

Using the build tasks in VSCode, you can trigger auto-compile on save.
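One note before wiring up the task: the `tsc: watch` task is driven by a `tsconfig.json` in the project root, so create one first if you don't have it (running `tsc --init` generates a full template). A minimal config might look like the sketch below; these options are illustrative defaults, not ones prescribed by this walkthrough:

```json
{
  // Illustrative starting point only; trim or extend for your project
  "compilerOptions": {
    "target": "es2019",   // JS language level to emit
    "module": "commonjs", // module system for the output
    "outDir": "./dist",   // where compiled .js files land
    "sourceMap": true,    // emit .js.map files for debugging
    "strict": true        // enable strict type checking
  },
  "include": ["src"]
}
```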
You can set the default build task through the Command Palette, or through the application's menu bar in macOS.

### Option 1:

- Enter `CMD+SHIFT+P`
- Select: `Tasks: Configure Default Build Task`
- Choose: `tsc: watch - tsconfig.json`

### Option 2:

- Find the VSCode menu bar in macOS
- Click: `Terminal` (open dropdown)
- Select: `Configure Default Build Task`
- Choose: the `tsc: watch` option

Now during development, anytime you save a `.ts` file, it will generate the `.js` output automatically.

## Hiding Compiler Output In Editor File Tree

After the build runs, `tsc` will populate the directory with compiled files. You may find `.js`, `.js.map`, or `.d.ts` files, which could be useful to explore when learning. However, these can clutter up the file tree in the IDE.

There is an easy way to hide these files in the workspace, without deleting them from the file system.

You can configure the following VSCode settings either in the global `settings.json` file, or per project workspace.

To set up a workspace, do `"CMD+SHIFT+P" => "Preferences: Open Workspace Settings"`. This creates the `/.vscode/settings.json` file in the project directory. Note: include the `.vscode` folder in the git history if the team shares these settings; otherwise, ignore it.

To find the settings file do `"CMD+SHIFT+P" => "Preferences: Open User Settings (JSON)"` or `"CMD+SHIFT+P" => "Preferences: Open Workspace Settings"`.

Add the following file-ignoring options to the VSCode `settings.json`:

```json
{
  "files.exclude": {
    "**/.git": true,
    "**/.DS_Store": true,
    "**/*.js.map": true,
    "**/*.js": { "when": "$(basename).ts" }
  }
}
```

Or

```json
{
  "files.exclude": {
    "**/.git": true,
    "**/.DS_Store": true,
    "**/*.d.ts": {
      "when": "$(basename).ts"
    },
    "**/*.js": {
      "when": "$(basename).ts"
    },
    "**/*.js.map": {
      "when": "$(basename)"
    }
  }
}
```

## Using .gitignore

When using git for your TypeScript project, you typically do not commit the compiled JavaScript output. You most likely will ignore the generated files, so you can configure that in the `.gitignore`. However, be aware of files you still need, for example a `jest.config.js` file.

Keeping the source `.ts` files, a `.gitignore` file may look like this:

```diff
*.js
!jest.config.js
*.d.ts
node_modules
!lambda/*.js
```

## Configure Format On Save

Optionally, you can avoid wasting time on formatting with `formatOnSave`. Add the following key to your `settings.json` file to enable it:

```json
"[typescript]": {
  "editor.defaultFormatter": "esbenp.prettier-vscode",
  "editor.formatOnSave": true
}
```

## Conclusion

That's it! You can now begin development for your new TypeScript project, with auto-compile and format-on-save enabled. The main purpose of this article was to document the setup process for my own use, and strengthen my understanding by teaching.

---

That’s all for today, I hope this article was helpful. If you have any questions, feel free to connect with me.

Follow my journey or connect with me here:
- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
- Portfolio: [spencerlepine.com](https://spencerlepine.com)
- https://github.com/spencerlepine
- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/content/typescript-development-set-up-for-vscode/index.md b/content/typescript-development-set-up-for-vscode/index.md
new file mode 100644
index 0000000..60eafd7
--- /dev/null
+++ b/content/typescript-development-set-up-for-vscode/index.md
@@ -0,0 +1,136 @@
---
title: TypeScript Development Set Up for VSCode
slug: typescript-development-set-up-for-vscode
tags: [Workflow, VSCode, TypeScript]
authors: [spencerlepine]
date: 2022-10-24T12:00
---

![](./thumbnail.jpg)

Looking to set up VSCode for a TypeScript project? This article will walk through the initial configuration steps for just that.

This walk-through assumes you already have the following installed:

- [VSCode](https://code.visualstudio.com/)
- [Node.js](https://nodejs.org/en/download/)
- [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm)

To begin, let’s install the `typescript` module globally:

```sh
$ npm install -g typescript
$ tsc --version
```

Great! Now we can code our `.ts` files. A small issue remains: in order to compile those files to `.js`, we must run `tsc` or `npm run build` manually. To avoid this, we can auto-compile **on save** in our IDE.

## Configure Auto-Compile on Save

Using the build tasks in VSCode, you can trigger auto-compile on save. You can set the default build task through the Command Palette, or through the application's menu bar in macOS.

### Option 1:

- Enter `CMD+SHIFT+P`
- Select: `Tasks: Configure Default Build Task`
- Choose: `tsc: watch - tsconfig.json`

### Option 2:

- Find the VSCode menu bar in macOS
- Click: `Terminal` (open dropdown)
- Select: `Configure Default Build Task`
- Choose: the `tsc: watch` option

Now during development, anytime you save a `.ts` file, it will generate the `.js` output automatically.

## Hiding Compiler Output In Editor File Tree

After the build runs, `tsc` will populate the directory with compiled files. You may find `.js`, `.js.map`, or `.d.ts` files, which could be useful to explore when learning. However, these can clutter up the file tree in the IDE.

There is an easy way to hide these files in the workspace, without deleting them from the file system.

You can configure the following VSCode settings either in the global `settings.json` file, or per project workspace.

To set up a workspace, do `"CMD+SHIFT+P" => "Preferences: Open Workspace Settings"`. This creates the `/.vscode/settings.json` file in the project directory. Note: include the `.vscode` folder in the git history if the team shares these settings; otherwise, ignore it.

To find the settings file do `"CMD+SHIFT+P" => "Preferences: Open User Settings (JSON)"` or `"CMD+SHIFT+P" => "Preferences: Open Workspace Settings"`.

Add the following file-ignoring options to the VSCode `settings.json`:

```json
{
  "files.exclude": {
    "**/.git": true,
    "**/.DS_Store": true,
    "**/*.js.map": true,
    "**/*.js": { "when": "$(basename).ts" }
  }
}
```

Or

```json
{
  "files.exclude": {
    "**/.git": true,
    "**/.DS_Store": true,
    "**/*.d.ts": {
      "when": "$(basename).ts"
    },
    "**/*.js": {
      "when": "$(basename).ts"
    },
    "**/*.js.map": {
      "when": "$(basename)"
    }
  }
}
```

## Using .gitignore

When using git for your TypeScript project, you typically do not commit the compiled JavaScript output. You most likely will ignore the generated files, so you can configure that in the `.gitignore`. However, be aware of files you still need, for example a `jest.config.js` file.

Keeping the source `.ts` files, a `.gitignore` file may look like this:

```diff
*.js
!jest.config.js
*.d.ts
node_modules
!lambda/*.js
```

## Configure Format On Save

Optionally, you can avoid wasting time on formatting with `formatOnSave`. Add the following key to your `settings.json` file to enable it:

```json
"[typescript]": {
  "editor.defaultFormatter": "esbenp.prettier-vscode",
  "editor.formatOnSave": true
}
```

## Conclusion

That's it! You can now begin development for your new TypeScript project, with auto-compile and format-on-save enabled. The main purpose of this article was to document the setup process for my own use, and strengthen my understanding by teaching.

---

That’s all for today, I hope this article was helpful. If you have any questions, feel free to connect with me.

Follow my journey or connect with me here:

- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
- Portfolio: [spencerlepine.com](https://spencerlepine.com)
- https://github.com/spencerlepine
- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/blog/typescript-development-set-up-for-vscode/typescript-development-set-up-for-vscode.md b/content/typescript-development-set-up-for-vscode/index.medium
similarity index 88%
rename from blog/typescript-development-set-up-for-vscode/typescript-development-set-up-for-vscode.md
rename to content/typescript-development-set-up-for-vscode/index.medium
index a2cdfaf..3029280 100644
--- a/blog/typescript-development-set-up-for-vscode/typescript-development-set-up-for-vscode.md
+++ b/content/typescript-development-set-up-for-vscode/index.medium
@@ -1,9 +1,7 @@
---
title: TypeScript Development Set Up for VSCode
-slug: typescript-development-set-up-for-vscode
-tags: [Workflow, VSCode, TypeScript]
-authors: [spencerlepine]
-date: 2022-10-24T12:00
+description: Brief walkthrough for TypeScript development in VSCode
+publish_status: "draft"
---

![](./thumbnail.jpg)
@@ -120,8 +118,11 @@ That's it! You can now begin development for your new TypeScript project, with a

---

-That’s all for today, I hope this article was helpful. If you have any questions, feel free to connect with me. You can find my profiles here:
-* [Twitter (@spencerlepine)](https://twitter.com/SpencerLepine)
-* [GitHub (@spencerlepine)](https://github.com/spencerlepine)
-* [LinkedIn](https://www.linkedin.com/in/spencer-lepine/)
-* [YouTube (Spencer Lepine)](https://www.youtube.com/channel/UCBL6vAHJZqUlyJp-rcFU55Q)
+That’s all for today, I hope this article was helpful. If you have any questions, feel free to connect with me.

Follow my journey or connect with me here:

- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
- Portfolio: [spencerlepine.com](https://spencerlepine.com)
- https://github.com/spencerlepine
- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/blog/typescript-development-set-up-for-vscode/thumbnail.jpg b/content/typescript-development-set-up-for-vscode/thumbnail.jpg
similarity index 100%
rename from blog/typescript-development-set-up-for-vscode/thumbnail.jpg
rename to content/typescript-development-set-up-for-vscode/thumbnail.jpg
diff --git a/blog/what-i-learned-during-100-days-of-code/what-i-learning-during-100-days-of-code.md b/content/what-i-learned-during-100-days-of-code/index.devto
similarity index 80%
rename from blog/what-i-learned-during-100-days-of-code/what-i-learning-during-100-days-of-code.md
rename to content/what-i-learned-during-100-days-of-code/index.devto
index ba360c3..912919e 100644
--- a/blog/what-i-learned-during-100-days-of-code/what-i-learning-during-100-days-of-code.md
+++ b/content/what-i-learned-during-100-days-of-code/index.devto
@@ -1,9 +1,8 @@
---
title: What I Learned During 100DaysOfCode
-slug: what-i-learned-during-100-days-of-code
-tags: [Git, Scripts, Terminal, Commands, GitHub]
-authors: [spencerlepine]
-date: 2021-06-26T12:00
+description: Summary for a 100 day coding challenge. # Dev.to
+tags: 'junior engineer, challenge, learning' # Dev.to (Max 3)
+canonical_url: null # Dev.to
---

![Blog Post Thumbnail](./thumbnail.jpg)

@@ -12,8 +11,9 @@ I recently completed a popular challenge on Twitter named #100DaysOfCode. There

In my case, it was not my first time coding when I started this. However, the challenge helped me stay dedicated and build upon each skill I learned.

-Day 1–10: The Challenge Begins!
-===============================
+The original README journal is in [this repository](https://github.com/spencerlepine/100-days-of-code)
+
+## Day 1–10: The Challenge Begins!

It all started with a ton of useful lessons on ES6 skills. There was a lot of new syntax and tricks that I never knew about. Since I started learning Javascript before ES6 was released, there were quite a few features I needed to practice.

[ES6 Course - Scrimba](https://scrimba.com/learn/introtoes6)

Having just completed a small course on the React framework, I worked on implementing the ES6 skills I picked up to make small projects. Here is a simple React app connected to GitHub's REST API. This helped me learn how to make fetch calls and save the data to state.

[spencerlepine/github-api-react - GitHub](https://github.com/spencerlepine/github-api-react)

Each day I worked on different React concepts: connecting/modifying a mock database locally, working with state and props, and passing data around. My most effective learning style was learning-by-doing. After reading the docs and seeing a new feature, I would realize how I could use it to improve my project.

To wrap up this part of the challenge, there were some coding challenges from Cyber Dojo to help practice problem solving. I was able to solve “Align Columns”, “LCD Digits”, and “Wonderland Number”. Check out my repo with my solutions here:

[spencerlepine/cyber-dojo-exercises - GitHub](https://github.com/spencerlepine/cyber-dojo-exercises)

-Day 11–20: Just Getting Started
-===============================
+## Day 11–20: Just Getting Started

At this point, there was some great momentum and I was always feeling more motivated to learn. The Scrimba React course had a lot of great lessons on all the basics of React.

[The React Bootcamp - Scrimba](https://scrimba.com/learn/react)

I built a controlled form in React that would update state based on events and targets. Next up was the big concept of hooks and functional components. Little did I know this was only the START of what is possible in React.

The React course was complete, which led me to start a new project called Spotify Top Songs. This site would connect to the Spotify Web API with client-side authentication. When a user connected their account, they could select various artists from a menu. The script would then generate a playlist by accessing the top 5 songs of each artist.

This time, the code was much more organized, with the components and logic separated. With so many fetch calls and bits of logic to intertwine, it was important to build everything slowly and cleanly.

[spencerlepine/spotify-top-songs - GitHub](https://github.com/spencerlepine/spotify-top-songs)

I learned about prop-types and default props, which can be pretty handy. Worked with React higher order components and children components. Started learning about AWS S3 buckets. To practice, I deployed my Spotify Top Songs to the S3 bucket.

After discovering the behemoth of AWS products, I wasted no time working to set up an AWS EC2 instance to host a static site, which I would connect to my Route53 domain.

Just before reaching day 20, I worked on an in-browser [Chess React](https://github.com/spencerlepine/react-chess) app. This was just another opportunity to practice Javascript and apply knowledge.

-Day 21–30: Gaining Momentum!
-============================
+## Day 21–30: Gaining Momentum!

With a solid understanding of React, it was time to double down on more computer science fundamentals. It was exciting to build toy projects, but that wouldn’t be enough.

After finding the CS50 course online, I watched the lectures on C and Python. This was very useful to learn about memory pointers and how the interpreter reads code. Dave is a GREAT teacher and I would highly recommend going through the course.

Lots to learn about memory allocation, what libraries are doing under the hood, string manipulation, and regex.

Alongside CS50 was more material about React memo and the Context API — write a component to access a ‘global’ state in a separate file, import that file, and render through that in any component of choice. Custom hooks that handle business logic. React router basics. Conditional rendering. All of these skills would allow me to start creating real multi-page sites!

Day 29 was the day I started working on Galvanize basic prep lessons.
-Day 31–40: Diving Deeper
-========================
+## Day 31–40: Diving Deeper

CS50 included some lessons about data structures and SQL. I worked on some really good challenges for the CS50 Fiftyville assignment.

It was time for me to learn how to connect a database. I began working with MongoDB and Node.js. I used Postman to practice making requests.

With a lot of data to store in state, I needed a way to organize it all. That's where React Redux came up — createStore, redux philosophies, subscribe, dispatch, combineReducer. Abstracting reducers to handle each state in isolation. combineReducers to combine everything and handle state more cleanly in a rootReducer.

With MongoDB connected to React, I was able to work on a MERN app that could read/write from the database. Connecting back-end/front-end routes and using controllers for the fetch logic/requests.

Day 36 was when I began the QuickCart React app. Connected it to a backend MongoDB Atlas server. Added reducers + actions for post/fetch from the frontend. Saved the data in Redux state. I created forms and routes with components connected to the Redux state using useSelectors.

-Day 41–50: Working on QuickCart
-===============================
+## Day 41–50: Working on QuickCart

As I worked through some lessons for Galvanize basic prep, I also started lessons on [freeCodeCamp](https://www.freecodecamp.org/learn/javascript-algorithms-and-data-structures/). Got a better understanding of Regex from that.

I revisited my AWS account. Started a basic portfolio site using React — Set up

With a working site, it would be really nice to have a blog hosted on the domain. I attempted to set up a blog. Researched Gatsby, GraphCMS, ButterCMS, GraphQL. Queries were not working. Unable to route to posts. This was a difficult concept to navigate, but along the way I learned a lot about Apache and setting up SSL.

-Day 51–60: A Working Blog!
-==========================
+## Day 51–60: A Working Blog!

Gatsby.js was the answer. Instead of using a headless CMS, I settled on local md files to use for an article. During this challenge, I spent many hours working with the ‘mdx’ plugin. I tried to connect a CMS through queries and GraphQL, but it was very difficult.

Flatiron wasn’t the only school that I was preparing for though. I completed m

The last major assignment on hand was working through more of the freeCodeCamp Javascript fundamentals material.

-Day 51–60: Coding Bootcamp!
-===========================
+## Day 61–70: Coding Bootcamp!

No, I didn’t start a coding bootcamp, but I was getting ready! I got accepted into Flatiron (they didn’t do a TAA anymore), accepted into General Assembly, and I was studying to pass my Hack Reactor technical assessment. My target was in August, which was over 100 days from this point.

With all that work done, I began Hack Reactor premium prep. The technical admissions

This is where I began working on testing. Test driven development is integral to Software Engineering. I was able to appreciate my code much more after practicing it.

-Day 71–80: Hack Reactor and Firebase
-====================================
+## Day 71–80: Hack Reactor and Firebase

At this point, I was making really good progress in my Hack Reactor prep work. I was able to learn the basic methodology of testing my code. Whenever I worked on Javascript challenges, it was important to practice writing tests and get used to the upfront setup.

One of the most important themes of this week was practicing problem solving. Ha

Now that I was able to write small tests for a challenge, I wanted to work on writing tests for a real project. I started learning how to use Enzyme and Jest for React test-driven development. The process is tedious at the start, but it will ensure the code is more robust. This was difficult for me at first, because I wasn’t sure if my React test was written incorrectly, or I needed to pass the test now instead. There were lots of features to read docs about, like redirecting, testing contexts/store, and tons more.
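To give a flavor of what that React TDD loop looked like, here is a tiny Jest + Enzyme test in the write-the-test-first style described above. The `Counter` component is hypothetical, and the snippet assumes Jest and Enzyme (with its React adapter) are already configured:

```js
import React from 'react';
import { shallow } from 'enzyme';
import Counter from './Counter'; // hypothetical component under test

test('increments the displayed count when the button is clicked', () => {
  // Written before the component exists: it fails until Counter is implemented
  const wrapper = shallow(<Counter />);
  expect(wrapper.find('.count').text()).toBe('0');

  wrapper.find('button').simulate('click');
  expect(wrapper.find('.count').text()).toBe('1');
});
```

The upfront setup is exactly the tedious part mentioned above, but once it exists, every refactor gets cheaper.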
-Day 81–90: Test Driven Development and APIs -=========================================== +## Day 81–90: Test Driven Development and APIs The TDD practice with React was the birth of Woofer. I wanted to work on a project and build it from the ground up writing tests. I did my best to write a test FIRST before writing code. This project did not go very far during my challenge, since there were other priorities. It was going to take a long time to complete the idea, and it was just good practice to implement testing. @@ -141,8 +133,7 @@ I stepped away from the deep rabbit hole of React TDD. I went back to QuickCart One feature I had been eager to add to the grocery app was “searching products”. With an empty form, users could manually input EVERY detail about a product, but nobody would want to use that. By connecting to the OpenFoodFacts API, I could search a dump of thousands of grocery products. This was an open source dump that anybody can contribute to. They also feature nutrition score data, so it can help users see healthier options. -Day 91–100: The Final Stretch -============================= +## Day 91–100: The Final Stretch After learning so much with Javascript and best-practices for writing code, it was time to USE these skills. I continued working through Hack Reactor prep material. I learned about higher order functions, scopes, and hoisting. There were lessons about terminal commands and important dev tools like homebrew. It was time to upgrade my developer workflow, and get familiar with industry standard software. @@ -154,22 +145,11 @@ The next Hack Reactor prep section was about Git and GitHub. This was a REALLY u More work on Mocha and Chai testing was done. Getting familiar with different libraries and how they have similar functions. -**Challenge Complete!** -======================= - -After 100 days of coding, I was able to explore new features and concepts relating to Javascript and software development. The dedication allowed me to keep my momentum and build upon what I learned the previous day. - -Interested in working together? -------------------------------- - -Let’s connect! Find me on any of my socials linked below: +## Conclusion -* [Twitter (@spencerlepine)](https://twitter.com/SpencerLepine) -* [GitHub (@spencerlepine)](https://github.com/spencerlepine) -* [YouTube (Spencer Lepine)](https://www.youtube.com/channel/UCBL6vAHJZqUlyJp-rcFU55Q) +Challenge complete! After 100 days of coding, I was able to explore new features and concepts relating to Javascript and software development. The dedication allowed me to keep my momentum and build upon what I learned the previous day. Here is an overview of everything I learned: --------------------------------------------- * ES6 Javascript * Functional vs. OPP programming @@ -193,81 +173,51 @@ Here is an overview of everything I learned: * Separating the front-end / back-end * Deploying to Heroku -**Projects:** -------------- +### Projects Explore more [projects](https://spencerlepine.com) — or check out the ones mentioned in the article: +- QuickCart + - Make a shopping list with personal grocery data to help budget. + - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/quickcart) -### QuickCart - -Make a shopping list with personal grocery data to help budget. - -[GitHub Repo](https://github.com/spencerlepine/quickcart) - ------------------------ - - -### Portfolio Site - -Portfolio and blog website ( Visit Here) created by Spencer Lepine. 
Built using static pages created with GatsbyJS… - -[GitHub Repo](https://github.com/spencerlepine/spencerlepine.com) - ------------------------ - -### Cyber Dojo Exercises - -Personal solutions to various Cyber Dojo exercises. Code is written in Python and tests use 'asserts' with pytest. All… - -[GitHub Repo](https://github.com/spencerlepine/cyber-dojo-exercises) - ------------------------ - - -### Woofer - -Tinder for Pets Web App. Swipe and connect with other furry friends in the area. - -[GitHub Repo](https://github.com/spencerlepine/woofer) - ------------------------ - -### React Chess - -Play chess in the browser by with drag and drop moves. This was created using the Javascript React framework. component… - -[GitHub Repo](https://github.com/spencerlepine/react-chess) - ------------------------ - -### Study Garden - -Improve focus and discipline with this timer app. Study until the timer runs out and add plants to your personal… - -[GitHub Repo](https://github.com/spencerlepine/study-garden) - ------------------------ - - -### Spotify Top Songs - -Generate a Spotify playlist based on the top rated songs of your favorite artists. Connect user Spotify accounts to… +- Portfolio Site + - Portfolio and blog website ([Visit Here](https://www.spencerlepine.com)) created by Spencer Lepine. Built using static pages created with GatsbyJS… + - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/portfolio-site-v2) -[GitHub Repo](https://github.com/spencerlepine/spotify-top-songs) +- Cyber Dojo Exercises + - Personal solutions to various Cyber Dojo exercises. Code is written in Python and tests use 'asserts' with pytest. All… + - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/cyber-dojo-exercises) ------------------------ +- Woofer + - Tinder for Pets Web App. Swipe and connect with other furry friends in the area. + - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/woofer) -### GitHub User Overview +- React Chess + - Play chess in the browser by with drag and drop moves. This was created using the Javascript React framework. component… + - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/react-chess) -This React App allows the user to type a username get an overview of their GitHub repos using the GitHub REST API. +- Study Garden + - Improve focus and discipline with this timer app. Study until the timer runs out and add plants to your personal… + - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/study-garden) -[GitHub Repo](https://github.com/spencerlepine/github-api-react) +- Spotify Top Songs + - Generate a Spotify playlist based on the top rated songs of your favorite artists. Connect user Spotify accounts to… + - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/spotify-top-songs) ------------------------ +- GitHub User Overview + - This React App allows the user to type a username get an overview of their GitHub repos using the GitHub REST API. + - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/github-api-react) -### CS50 Problem Sets +- CS50 Problem Sets + - My solutions to the online CS50 course generously provided by Harvard University. + - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/cs50-problem-sets) -My solutions to the online CS50 course generously provided by Harvard University. +## Interested in working together? 
-[GitHub Repo](https://github.com/spencerlepine/cs50-problem-sets)
+Follow my journey or connect with me here:
+- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
+- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
+- Portfolio: [spencerlepine.com](https://spencerlepine.com)
+- GitHub: [@spencerlepine](https://github.com/spencerlepine)
+- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
\ No newline at end of file
diff --git a/content/what-i-learned-during-100-days-of-code/index.md b/content/what-i-learned-during-100-days-of-code/index.md
new file mode 100644
index 0000000..cd7c74e
--- /dev/null
+++ b/content/what-i-learned-during-100-days-of-code/index.md
@@ -0,0 +1,278 @@
+---
+title: What I Learned During 100DaysOfCode
+slug: what-i-learned-during-100-days-of-code
+tags: [Junior Engineer, Challenge, Learning]
+authors: [spencerlepine]
+date: 2021-06-26T12:00
+---
+
+![Blog Post Thumbnail](./thumbnail.jpg)
+
+I recently completed a popular challenge on Twitter named #100DaysOfCode. There is no barrier to entry, and you just need to code for at least 1 hour a day. This is a great challenge to motivate yourself and see your progress alongside others.
+
+In my case, it was not my first time coding when I started this. However, the challenge helped me stay dedicated and build upon each skill I learned.
+
+To see the original README journal, check out [this repository](https://github.com/spencerlepine/100-days-of-code).
+
+## Day 1–10: The Challenge Begins!
+
+It all started with a ton of useful lessons on ES6 skills. There was a lot of new syntax and tricks that I never knew about. Since I started learning Javascript before ES6 was released, there were quite a few features I needed to practice.
+
+[ ES6 Course - Scrimba](https://scrimba.com/learn/introtoes6)
+
+Having just completed a small course on the React framework, I worked on implementing the ES6 skills I picked up to make small projects. Here is a simple React app connected to GitHub's REST API. This helped me learn how to make fetch calls and save the data to state.
+
+[ spencerlepine/github-api-react - GitHub](https://github.com/spencerlepine/github-api-react)
+
+Each day I worked on different React concepts. Connecting/modifying a mock database locally. Working with state and props and passing data around. My most effective learning style was learning-by-doing. After reading the docs and seeing a new feature, I would realize how I could use it to improve my project.
+
+To wrap up this part of the challenge, there were some coding challenges from Cyber Dojo to help practice problem solving. I was able to solve “Align Columns”, “LCD Digits”, and “Wonderland Number”. Check out my repo with my solutions here:
+
+[spencerlepine/cyber-dojo-exercises - GitHub](https://github.com/spencerlepine/cyber-dojo-exercises)
+
+## Day 11–20: Just Getting Started
+
+At this point, there was some great momentum and I was always feeling more motivated to learn. The Scrimba React course had a lot of great lessons on all the basics of React.
+
+[The React Bootcamp - Scrimba](https://scrimba.com/learn/react)
+
+I built a controlled form in React that would update state based on events and targets. Next up was the big concept of hooks and functional components. Little did I know this was only the START of what is possible in React.
+
+The React course was complete, which led me to start a new project called Spotify Top Songs. This site would connect to the Spotify Web API with client-side authentication.
+When a user connected their account, they could select various artists from a menu. The script would then generate a playlist by accessing the top 5 songs of each artist.
+
+This time, the code was much more organized, with the components and logic separated. With so many fetch calls and bits of logic to intertwine, it was important to build everything slowly and cleanly.
+
+[spencerlepine/spotify-top-songs - GitHub](https://github.com/spencerlepine/spotify-top-songs)
+
+I learned about prop-types and default props, which can be pretty handy. Worked with React higher-order components and children components. Started learning about AWS S3 buckets. To practice, I deployed my Spotify Top Songs app to an S3 bucket.
+
+After discovering the behemoth of AWS products, I wasted no time setting up an AWS EC2 instance to host a static site, which I would connect to my Route53 domain.
+
+Just before reaching day 20, I worked on an in-browser [Chess React](https://github.com/spencerlepine/react-chess) app. This was just another opportunity to practice Javascript and apply knowledge.
+
+## Day 21–30: Gaining Momentum!
+
+With a solid understanding of React, it was time to double down on more computer science fundamentals. It was exciting to build toy projects, but that wouldn’t be enough.
+
+After finding the CS50 course online, I watched the lectures on C and Python. This was very useful to learn about memory pointers and how the interpreter reads code. David Malan is a GREAT teacher and I would highly recommend going through the course.
+
+Lots to learn about memory allocation, what libraries are doing under the hood, string manipulation, and regex.
+
+Alongside CS50 was more material about React memo and the Context API — write a component to access a ‘global’ state in a separate file, import that file, and render through it in any component of choice. Custom hooks that handle business logic. React Router basics. Conditional rendering. All of these skills would allow me to start creating real multi-page sites!
+
+Day 29 was the day I started working on Galvanize basic prep lessons.
+
+## Day 31–40: Diving Deeper
+
+CS50 included some lessons about data structures and SQL. I worked on some really good challenges for the CS50 Fiftyville assignment.
+
+It was time for me to learn how to connect a database. I began working with MongoDB and Node.js. I used Postman to practice making requests.
+
+With a lot of data to store in state, I needed a way to organize it all. That's where React Redux came up — createStore, Redux philosophies, subscribe, dispatch, combineReducers. Abstracting reducers to handle each slice of state in isolation. combineReducers to combine everything and handle state more cleanly in a rootReducer.
+
+With MongoDB connected to React, I was able to work on a MERN app that could read/write from the database. Connecting back-end/front-end routes and using controllers for the fetch logic/requests.
+
+Day 36 was when I began the QuickCart React app. Connected it to a backend MongoDB Atlas server. Added reducers + actions to post/fetch from the frontend. Saved the data in Redux state. I created forms and routes with components connected to the Redux state using useSelector.
+
+## Day 41–50: Working on QuickCart
+
+As I worked through some lessons for Galvanize basic prep, I also started lessons on [freeCodeCamp](https://www.freecodecamp.org/learn/javascript-algorithms-and-data-structures/). Got a better understanding of Regex from that.
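+
+As a quick illustration, here is a sketch of the kind of Regex exercise freeCodeCamp walks you through (the classic “spinal case” kata; a toy example, not my actual solution):
+
+```js
+// Convert a string to spinal-case: lowercase words joined by dashes.
+const spinalCase = (str) =>
+  str
+    .replace(/([a-z])([A-Z])/g, "$1 $2") // split camelCase into separate words
+    .replace(/[\s_]+/g, "-") // collapse spaces/underscores into single dashes
+    .toLowerCase();
+
+console.log(spinalCase("This Is Spinal_Tap")); // "this-is-spinal-tap"
+console.log(spinalCase("thisIsSpinalTap")); // "this-is-spinal-tap"
+```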
+
+Next came Javascript algorithm practice + Object Oriented Programming review. Inheriting methods from parent objects. Using object prototypes to reuse methods.
+
+I revisited my AWS account. Started a basic portfolio site using React — set up a LAMP stack server to host the website. Directed my AWS Route53 domain to a DigitalOcean droplet. Set up SSL, MySQL, and WordPress. Working to set up a blog too.
+
+With a working site, it would be really nice to have a blog hosted on the domain. I attempted to set up a blog. Researched Gatsby, GraphCMS, ButterCMS, GraphQL. Queries were not working. Unable to route to posts. This was a difficult concept to navigate, but along the way I learned a lot about Apache and setting up SSL.
+
+## Day 51–60: A Working Blog!
+
+Gatsby.js was the answer. Instead of using a headless CMS, I settled on local md files to use for articles. During this challenge, I spent many hours working with the ‘mdx’ plugin. I tried to connect a CMS through queries and GraphQL, but it was very difficult.
+
+Around this time I continued working through Learn.co lessons for Flatiron prep. There were two main sections on Javascript and Ruby. The Ruby material was interesting, and they even did a review on Object Oriented Programming.
+
+Flatiron wasn’t the only school that I was preparing for though. I completed my technical assessment by creating a basic HTML page with images and a clickable button.
+
+The last major assignment on hand was working through more of the freeCodeCamp Javascript fundamentals material.
+
+## Day 61–70: Coding Bootcamp!
+
+No, I didn’t start a coding bootcamp, but I was getting ready! I got accepted into Flatiron (they didn’t do a TAA anymore), accepted into General Assembly, and I was studying to pass my Hack Reactor technical assessment. My target start was in August, which was over 100 days out from this point.
+
+Making sure I could get into these schools was my main priority, but I continued learning and working on projects on the side.
+
+I worked on building the QuickCart app and added tons of features. Import/export a grocery database. Improve the UI and styles. Upload images for a product in the form. Convert images to base64 strings. Work with file blobs and cropping images. Generate suggested products with a recommendation algorithm. Work on authentication between the front/back-end for the MERN app. Add a “cart” to store items until purchase.
+
+At this point, the grocery app was still connected to MongoDB. I was able to use localStorage and save some user data, but I knew I needed EVERYTHING to be in the cloud.
+
+With all that work done, I began Hack Reactor premium prep. The technical admissions assessment was going to be difficult, so this would help me prepare my verbalization and problem solving.
+
+This is where I began working on testing. Test driven development is integral to Software Engineering. I was able to appreciate my code much more after practicing it.
+
+## Day 71–80: Hack Reactor and Firebase
+
+At this point, I was making really good progress in my Hack Reactor prep work. I was able to learn the basic methodology of testing my code. Whenever I worked on Javascript challenges, it was important to practice writing tests and get used to the upfront setup.
+
+This week was my first attempt at the Hack Reactor TAA, and I passed! This was incredible, because now I got to choose between GA, Flatiron, and Hack Reactor.
+
+I continued working on QuickCart on the side. There were some bug fixes and UI improvements that needed to be done.
+I migrated from MongoDB to Firebase for the backend. I had to read through the docs and get familiar with Firestore.
+
+One of the most important themes of this week was practicing problem solving. Hack Reactor was really pushing best practices and effective communication. Each day I worked on a toy problem/code challenge. With a timer running, I would work on my pseudo-code BEFORE starting and really explain/verbalize my thought process as much as I could.
+
+Now that I was able to write small tests for a challenge, I wanted to work on writing tests for a real project. I started learning how to use Enzyme and Jest for React test-driven development. The process is tedious at the start, but it will ensure the code is more robust. This was difficult for me at first, because I wasn’t sure if my React test was written incorrectly, or if the code simply needed more work to pass it. There were lots of features to read docs about, like redirecting, testing contexts/store, and tons more.
+
+## Day 81–90: Test Driven Development and APIs
+
+The TDD practice with React was the birth of Woofer. I wanted to work on a project and build it from the ground up writing tests. I did my best to write a test FIRST before writing code. This project did not go very far during my challenge, since there were other priorities. It was going to take a long time to complete the idea, and it was just good practice to implement testing.
+
+I documented/planned out the Woofer app before starting. I made wireframes and planned out the logic/routing ahead of time. It was taking WAY too much time to work with mock stores and complex routing in testing. I even fell into certain anti-patterns with testing after researching tests online.
+
+[spencerlepine/woofer - GitHub](https://github.com/spencerlepine/woofer)
+
+I stepped away from the deep rabbit hole of React TDD. I went back to QuickCart and migrated EVERYTHING to the client side. Now it was one complete React app with Firebase authentication built in. I was able to host this site on Heroku too, so anyone could use it.
+
+[spencerlepine/quickcart - GitHub](https://github.com/spencerlepine/quickcart)
+
+One feature I had been eager to add to the grocery app was “searching products”. With an empty form, users could manually input EVERY detail about a product, but nobody would want to use that. By connecting to the OpenFoodFacts API, I could search a dump of thousands of grocery products. This was an open source dump that anybody can contribute to. They also feature nutrition score data, so it can help users see healthier options.
+
+## Day 91–100: The Final Stretch
+
+After learning so much with Javascript and best practices for writing code, it was time to USE these skills. I continued working through Hack Reactor prep material. I learned about higher order functions, scopes, and hoisting. There were lessons about terminal commands and important dev tools like Homebrew. It was time to upgrade my developer workflow, and get familiar with industry standard software.
+
+Here I read about Node.js, npm, semver, and modules. This was everything I needed to know about how projects are set up and how developers are able to work together. There needs to be structure and conventions throughout the code base so everyone is on the same page.
+
+I also added some features to QuickCart on the side again. I connected a Google Custom Search Engine to allow image searches for a product. Instead of having the user upload or snap a photo, they could simply link an existing photo.
+This allowed me to store the image sources as links, instead of a long base64 string of image data. That would improve scalability and prevent product images from being lost easily.
+
+The next Hack Reactor prep section was about Git and GitHub. This was a REALLY useful section to go through because it will be the backbone of any project. Knowing how to properly document and collaborate on a project makes everything go so much more smoothly. Your code needs to be readable and maintainable. People can review your code and merge branches to improve the project. You could even check out different commits and revert your code. Before I learned about these practices, I would always CTRL+Z my file and start over again, wasting so much time.
+
+More work on Mocha and Chai testing was done. Getting familiar with different libraries and how they have similar functions.
+
+## Conclusion
+
+Challenge complete! After 100 days of coding, I was able to explore new features and concepts relating to Javascript and software development. The dedication allowed me to keep my momentum and build upon what I learned the previous day.
+
+Here is an overview of everything I learned:
+
+- ES6 Javascript
+- Functional vs. OOP programming
+- Javascript algorithms and data structures
+- Node.js
+- Redux
+- React
+- SQL
+- Python
+- Comp Sci fundamentals
+- MongoDB
+- Firebase
+- React Context, state, props, controlled forms
+- CMS
+- Hosting a static site
+- Procedure for problem solving (pseudocode, breaking it down)
+- Mocha/Chai testing
+- React Jest/Enzyme testing
+- Connecting to APIs (Spotify Web API, GitHub REST API, OpenFoodFacts API)
+- Fetch calls + axios
+- Separating the front-end / back-end
+- Deploying to Heroku
+
+### Projects
+
+Explore more [projects](https://spencerlepine.com) — or check out the ones mentioned in the article:
+
+- QuickCart
+
+  - Make a shopping list with personal grocery data to help budget.
+  - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/quickcart)
+
+- Portfolio Site
+
+  - Portfolio and blog website ([Visit Here](https://www.spencerlepine.com)) created by Spencer Lepine. Built using static pages created with GatsbyJS…
+  - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/portfolio-site-v2)
+
+- Cyber Dojo Exercises
+
+  - Personal solutions to various Cyber Dojo exercises. Code is written in Python and tests use 'asserts' with pytest. All…
+  - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/cyber-dojo-exercises)
+
+- Woofer
+
+  - Tinder for Pets Web App. Swipe and connect with other furry friends in the area.
+  - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/woofer)
+
+- React Chess
+
+  - Play chess in the browser with drag-and-drop moves. This was created using the Javascript React framework. component…
+  - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/react-chess)
+
+- Study Garden
+
+  - Improve focus and discipline with this timer app. Study until the timer runs out and add plants to your personal…
+  - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/study-garden)
+
+- Spotify Top Songs
+
+  - Generate a Spotify playlist based on the top rated songs of your favorite artists. Connect user Spotify accounts to…
+  - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/spotify-top-songs)
+
+- GitHub User Overview
+
+  - This React App allows the user to type a username and get an overview of their GitHub repos using the GitHub REST API.
+  - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/github-api-react)
+
+- CS50 Problem Sets
+  - My solutions to the online CS50 course generously provided by Harvard University.
+  - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/cs50-problem-sets)
+
+## Interested in working together?
+
+Follow my journey or connect with me here:
+
+- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
+- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
+- Portfolio: [spencerlepine.com](https://spencerlepine.com)
+- GitHub: [@spencerlepine](https://github.com/spencerlepine)
+- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
diff --git a/content/what-i-learned-during-100-days-of-code/index.medium b/content/what-i-learned-during-100-days-of-code/index.medium
new file mode 100644
index 0000000..fa2a6c9
--- /dev/null
+++ b/content/what-i-learned-during-100-days-of-code/index.medium
@@ -0,0 +1,222 @@
+---
+title: What I Learned During 100DaysOfCode
+description: Summary for a 100 day coding challenge.
+publish_status: "draft"
+---
+
+![Blog Post Thumbnail](./thumbnail.jpg)
+
+I recently completed a popular challenge on Twitter named #100DaysOfCode. There is no barrier to entry, and you just need to code for at least 1 hour a day. This is a great challenge to motivate yourself and see your progress alongside others.
+
+In my case, it was not my first time coding when I started this. However, the challenge helped me stay dedicated and build upon each skill I learned.
+
+To see the original README journal, check out [this repository](https://github.com/spencerlepine/100-days-of-code).
+
+## Day 1–10: The Challenge Begins!
+
+It all started with a ton of useful lessons on ES6 skills. There was a lot of new syntax and tricks that I never knew about. Since I started learning Javascript before ES6 was released, there were quite a few features I needed to practice.
+
+[ ES6 Course - Scrimba](https://scrimba.com/learn/introtoes6)
+
+Having just completed a small course on the React framework, I worked on implementing the ES6 skills I picked up to make small projects. Here is a simple React app connected to GitHub's REST API. This helped me learn how to make fetch calls and save the data to state.
+
+[ spencerlepine/github-api-react - GitHub](https://github.com/spencerlepine/github-api-react)
+
+Each day I worked on different React concepts. Connecting/modifying a mock database locally. Working with state and props and passing data around. My most effective learning style was learning-by-doing. After reading the docs and seeing a new feature, I would realize how I could use it to improve my project.
+
+To wrap up this part of the challenge, there were some coding challenges from Cyber Dojo to help practice problem solving. I was able to solve “Align Columns”, “LCD Digits”, and “Wonderland Number”. Check out my repo with my solutions here:
+
+[spencerlepine/cyber-dojo-exercises - GitHub](https://github.com/spencerlepine/cyber-dojo-exercises)
+
+## Day 11–20: Just Getting Started
+
+At this point, there was some great momentum and I was always feeling more motivated to learn. The Scrimba React course had a lot of great lessons on all the basics of React.
+
+[The React Bootcamp - Scrimba](https://scrimba.com/learn/react)
+
+I built a controlled form in React that would update state based on events and targets. Next up was the big concept of hooks and functional components. Little did I know this was only the START of what is possible in React.
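+
+As a rough sketch of what that controlled form looked like (simplified; the component and field names are illustrative, not the actual project code):
+
+```js
+import React, { useState } from "react";
+
+const SignupForm = () => {
+  // The input's value lives in React state, making React the single source of truth.
+  const [email, setEmail] = useState("");
+
+  const handleSubmit = (event) => {
+    event.preventDefault(); // keep the browser from reloading the page
+    console.log("Submitting:", email);
+  };
+
+  return (
+    <form onSubmit={handleSubmit}>
+      {/* Every keystroke flows through state via the event target */}
+      <input value={email} onChange={(event) => setEmail(event.target.value)} />
+      <button type="submit">Sign Up</button>
+    </form>
+  );
+};
+
+export default SignupForm;
+```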
+
+The React course was complete, which led me to start a new project called Spotify Top Songs. This site would connect to the Spotify Web API with client-side authentication. When a user connected their account, they could select various artists from a menu. The script would then generate a playlist by accessing the top 5 songs of each artist.
+
+This time, the code was much more organized, with the components and logic separated. With so many fetch calls and bits of logic to intertwine, it was important to build everything slowly and cleanly.
+
+[spencerlepine/spotify-top-songs - GitHub](https://github.com/spencerlepine/spotify-top-songs)
+
+I learned about prop-types and default props, which can be pretty handy. Worked with React higher-order components and children components. Started learning about AWS S3 buckets. To practice, I deployed my Spotify Top Songs app to an S3 bucket.
+
+After discovering the behemoth of AWS products, I wasted no time setting up an AWS EC2 instance to host a static site, which I would connect to my Route53 domain.
+
+Just before reaching day 20, I worked on an in-browser [Chess React](https://github.com/spencerlepine/react-chess) app. This was just another opportunity to practice Javascript and apply knowledge.
+
+## Day 21–30: Gaining Momentum!
+
+With a solid understanding of React, it was time to double down on more computer science fundamentals. It was exciting to build toy projects, but that wouldn’t be enough.
+
+After finding the CS50 course online, I watched the lectures on C and Python. This was very useful to learn about memory pointers and how the interpreter reads code. David Malan is a GREAT teacher and I would highly recommend going through the course.
+
+Lots to learn about memory allocation, what libraries are doing under the hood, string manipulation, and regex.
+
+Alongside CS50 was more material about React memo and the Context API — write a component to access a ‘global’ state in a separate file, import that file, and render through it in any component of choice. Custom hooks that handle business logic. React Router basics. Conditional rendering. All of these skills would allow me to start creating real multi-page sites!
+
+Day 29 was the day I started working on Galvanize basic prep lessons.
+
+## Day 31–40: Diving Deeper
+
+CS50 included some lessons about data structures and SQL. I worked on some really good challenges for the CS50 Fiftyville assignment.
+
+It was time for me to learn how to connect a database. I began working with MongoDB and Node.js. I used Postman to practice making requests.
+
+With a lot of data to store in state, I needed a way to organize it all. That's where React Redux came up — createStore, Redux philosophies, subscribe, dispatch, combineReducers. Abstracting reducers to handle each slice of state in isolation. combineReducers to combine everything and handle state more cleanly in a rootReducer.
+
+With MongoDB connected to React, I was able to work on a MERN app that could read/write from the database. Connecting back-end/front-end routes and using controllers for the fetch logic/requests.
+
+Day 36 was when I began the QuickCart React app. Connected it to a backend MongoDB Atlas server. Added reducers + actions to post/fetch from the frontend. Saved the data in Redux state. I created forms and routes with components connected to the Redux state using useSelector.
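+
+A minimal sketch of that Redux flow (using the legacy createStore API I was learning at the time; the store shape is illustrative, not QuickCart's actual code):
+
+```js
+import { createStore, combineReducers } from "redux";
+
+// Each reducer manages one slice of state in isolation.
+const productsReducer = (state = [], action) => {
+  switch (action.type) {
+    case "ADD_PRODUCT":
+      return [...state, action.payload];
+    default:
+      return state;
+  }
+};
+
+const cartReducer = (state = [], action) => {
+  switch (action.type) {
+    case "ADD_TO_CART":
+      return [...state, action.payload];
+    default:
+      return state;
+  }
+};
+
+// combineReducers merges the slices into a single rootReducer.
+const rootReducer = combineReducers({ products: productsReducer, cart: cartReducer });
+const store = createStore(rootReducer);
+
+store.subscribe(() => console.log("State changed:", store.getState()));
+store.dispatch({ type: "ADD_PRODUCT", payload: { name: "Apples", price: 2.5 } });
+```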
+
+## Day 41–50: Working on QuickCart
+
+As I worked through some lessons for Galvanize basic prep, I also started lessons on [freeCodeCamp](https://www.freecodecamp.org/learn/javascript-algorithms-and-data-structures/). Got a better understanding of Regex from that.
+
+Next came Javascript algorithm practice + Object Oriented Programming review. Inheriting methods from parent objects. Using object prototypes to reuse methods.
+
+I revisited my AWS account. Started a basic portfolio site using React — set up a LAMP stack server to host the website. Directed my AWS Route53 domain to a DigitalOcean droplet. Set up SSL, MySQL, and WordPress. Working to set up a blog too.
+
+With a working site, it would be really nice to have a blog hosted on the domain. I attempted to set up a blog. Researched Gatsby, GraphCMS, ButterCMS, GraphQL. Queries were not working. Unable to route to posts. This was a difficult concept to navigate, but along the way I learned a lot about Apache and setting up SSL.
+
+## Day 51–60: A Working Blog!
+
+Gatsby.js was the answer. Instead of using a headless CMS, I settled on local md files to use for articles. During this challenge, I spent many hours working with the ‘mdx’ plugin. I tried to connect a CMS through queries and GraphQL, but it was very difficult.
+
+Around this time I continued working through Learn.co lessons for Flatiron prep. There were two main sections on Javascript and Ruby. The Ruby material was interesting, and they even did a review on Object Oriented Programming.
+
+Flatiron wasn’t the only school that I was preparing for though. I completed my technical assessment by creating a basic HTML page with images and a clickable button.
+
+The last major assignment on hand was working through more of the freeCodeCamp Javascript fundamentals material.
+
+## Day 61–70: Coding Bootcamp!
+
+No, I didn’t start a coding bootcamp, but I was getting ready! I got accepted into Flatiron (they didn’t do a TAA anymore), accepted into General Assembly, and I was studying to pass my Hack Reactor technical assessment. My target start was in August, which was over 100 days out from this point.
+
+Making sure I could get into these schools was my main priority, but I continued learning and working on projects on the side.
+
+I worked on building the QuickCart app and added tons of features. Import/export a grocery database. Improve the UI and styles. Upload images for a product in the form. Convert images to base64 strings. Work with file blobs and cropping images. Generate suggested products with a recommendation algorithm. Work on authentication between the front/back-end for the MERN app. Add a “cart” to store items until purchase.
+
+At this point, the grocery app was still connected to MongoDB. I was able to use localStorage and save some user data, but I knew I needed EVERYTHING to be in the cloud.
+
+With all that work done, I began Hack Reactor premium prep. The technical admissions assessment was going to be difficult, so this would help me prepare my verbalization and problem solving.
+
+This is where I began working on testing. Test driven development is integral to Software Engineering. I was able to appreciate my code much more after practicing it.
+
+## Day 71–80: Hack Reactor and Firebase
+
+At this point, I was making really good progress in my Hack Reactor prep work. I was able to learn the basic methodology of testing my code. Whenever I worked on Javascript challenges, it was important to practice writing tests and get used to the upfront setup.
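+
+The test-first habit looked something like this (a toy Mocha/Chai example, not from the actual prep material):
+
+```js
+const { expect } = require("chai");
+
+// Write the failing spec first, then implement sum() until it passes.
+const sum = (nums) => nums.reduce((total, n) => total + n, 0);
+
+describe("sum", () => {
+  it("adds an array of numbers", () => {
+    expect(sum([1, 2, 3])).to.equal(6);
+  });
+
+  it("returns 0 for an empty array", () => {
+    expect(sum([])).to.equal(0);
+  });
+});
+```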
+
+This week was my first attempt at the Hack Reactor TAA, and I passed! This was incredible, because now I got to choose between GA, Flatiron, and Hack Reactor.
+
+I continued working on QuickCart on the side. There were some bug fixes and UI improvements that needed to be done. I migrated from MongoDB to Firebase for the backend. I had to read through the docs and get familiar with Firestore.
+
+One of the most important themes of this week was practicing problem solving. Hack Reactor was really pushing best practices and effective communication. Each day I worked on a toy problem/code challenge. With a timer running, I would work on my pseudo-code BEFORE starting and really explain/verbalize my thought process as much as I could.
+
+Now that I was able to write small tests for a challenge, I wanted to work on writing tests for a real project. I started learning how to use Enzyme and Jest for React test-driven development. The process is tedious at the start, but it will ensure the code is more robust. This was difficult for me at first, because I wasn’t sure if my React test was written incorrectly, or if the code simply needed more work to pass it. There were lots of features to read docs about, like redirecting, testing contexts/store, and tons more.
+
+## Day 81–90: Test Driven Development and APIs
+
+The TDD practice with React was the birth of Woofer. I wanted to work on a project and build it from the ground up writing tests. I did my best to write a test FIRST before writing code. This project did not go very far during my challenge, since there were other priorities. It was going to take a long time to complete the idea, and it was just good practice to implement testing.
+
+I documented/planned out the Woofer app before starting. I made wireframes and planned out the logic/routing ahead of time. It was taking WAY too much time to work with mock stores and complex routing in testing. I even fell into certain anti-patterns with testing after researching tests online.
+
+[spencerlepine/woofer - GitHub](https://github.com/spencerlepine/woofer)
+
+I stepped away from the deep rabbit hole of React TDD. I went back to QuickCart and migrated EVERYTHING to the client side. Now it was one complete React app with Firebase authentication built in. I was able to host this site on Heroku too, so anyone could use it.
+
+[spencerlepine/quickcart - GitHub](https://github.com/spencerlepine/quickcart)
+
+One feature I had been eager to add to the grocery app was “searching products”. With an empty form, users could manually input EVERY detail about a product, but nobody would want to use that. By connecting to the OpenFoodFacts API, I could search a dump of thousands of grocery products. This was an open source dump that anybody can contribute to. They also feature nutrition score data, so it can help users see healthier options.
+
+## Day 91–100: The Final Stretch
+
+After learning so much with Javascript and best practices for writing code, it was time to USE these skills. I continued working through Hack Reactor prep material. I learned about higher order functions, scopes, and hoisting. There were lessons about terminal commands and important dev tools like Homebrew. It was time to upgrade my developer workflow, and get familiar with industry standard software.
+
+Here I read about Node.js, npm, semver, and modules. This was everything I needed to know about how projects are set up and how developers are able to work together.
+There needs to be structure and conventions throughout the code base so everyone is on the same page.
+
+I also added some features to QuickCart on the side again. I connected a Google Custom Search Engine to allow image searches for a product. Instead of having the user upload or snap a photo, they could simply link an existing photo. This allowed me to store the image sources as links, instead of a long base64 string of image data. That would improve scalability and prevent product images from being lost easily.
+
+The next Hack Reactor prep section was about Git and GitHub. This was a REALLY useful section to go through because it will be the backbone of any project. Knowing how to properly document and collaborate on a project makes everything go so much more smoothly. Your code needs to be readable and maintainable. People can review your code and merge branches to improve the project. You could even check out different commits and revert your code. Before I learned about these practices, I would always CTRL+Z my file and start over again, wasting so much time.
+
+More work on Mocha and Chai testing was done. Getting familiar with different libraries and how they have similar functions.
+
+## Conclusion
+
+Challenge complete! After 100 days of coding, I was able to explore new features and concepts relating to Javascript and software development. The dedication allowed me to keep my momentum and build upon what I learned the previous day.
+
+Here is an overview of everything I learned:
+
+* ES6 Javascript
+* Functional vs. OOP programming
+* Javascript algorithms and data structures
+* Node.js
+* Redux
+* React
+* SQL
+* Python
+* Comp Sci fundamentals
+* MongoDB
+* Firebase
+* React Context, state, props, controlled forms
+* CMS
+* Hosting a static site
+* Procedure for problem solving (pseudocode, breaking it down)
+* Mocha/Chai testing
+* React Jest/Enzyme testing
+* Connecting to APIs (Spotify Web API, GitHub REST API, OpenFoodFacts API)
+* Fetch calls + axios
+* Separating the front-end / back-end
+* Deploying to Heroku
+
+### Projects
+
+Explore more [projects](https://spencerlepine.com) — or check out the ones mentioned in the article:
+
+- QuickCart
+  - Make a shopping list with personal grocery data to help budget.
+  - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/quickcart)
+
+- Portfolio Site
+  - Portfolio and blog website ([Visit Here](https://www.spencerlepine.com)) created by Spencer Lepine. Built using static pages created with GatsbyJS…
+  - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/portfolio-site-v2)
+
+- Cyber Dojo Exercises
+  - Personal solutions to various Cyber Dojo exercises. Code is written in Python and tests use 'asserts' with pytest. All…
+  - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/cyber-dojo-exercises)
+
+- Woofer
+  - Tinder for Pets Web App. Swipe and connect with other furry friends in the area.
+  - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/woofer)
+
+- React Chess
+  - Play chess in the browser with drag-and-drop moves. This was created using the Javascript React framework. component…
+  - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/react-chess)
+
+- Study Garden
+  - Improve focus and discipline with this timer app. Study until the timer runs out and add plants to your personal…
+  - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/study-garden)
+
+- Spotify Top Songs
+  - Generate a Spotify playlist based on the top rated songs of your favorite artists. Connect user Spotify accounts to…
+  - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/spotify-top-songs)
+
+- GitHub User Overview
+  - This React App allows the user to type a username and get an overview of their GitHub repos using the GitHub REST API.
+  - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/github-api-react)
+
+- CS50 Problem Sets
+  - My solutions to the online CS50 course generously provided by Harvard University.
+  - **Source Code:** [GitHub Repo](https://github.com/spencerlepine/cs50-problem-sets)
+
+## Interested in working together?
+
+Follow my journey or connect with me here:
+- LinkedIn: [/in/spencerlepine](https://www.linkedin.com/in/spencerlepine/)
+- Email: [spencer.sayhello@gmail.com](mailto:spencer.sayhello@gmail.com)
+- Portfolio: [spencerlepine.com](https://spencerlepine.com)
+- GitHub: [@spencerlepine](https://github.com/spencerlepine)
+- Twitter: [@spencerlepine](https://twitter.com/spencerlepine)
\ No newline at end of file
diff --git a/blog/what-i-learned-during-100-days-of-code/thumbnail.jpg b/content/what-i-learned-during-100-days-of-code/thumbnail.jpg
similarity index 100%
rename from blog/what-i-learned-during-100-days-of-code/thumbnail.jpg
rename to content/what-i-learned-during-100-days-of-code/thumbnail.jpg
diff --git a/docs/intro.md b/docs/intro.md
deleted file mode 100644
index 45e8604..0000000
--- a/docs/intro.md
+++ /dev/null
@@ -1,47 +0,0 @@
----
-sidebar_position: 1
----
-
-# Tutorial Intro
-
-Let's discover **Docusaurus in less than 5 minutes**.
-
-## Getting Started
-
-Get started by **creating a new site**.
-
-Or **try Docusaurus immediately** with **[docusaurus.new](https://docusaurus.new)**.
-
-### What you'll need
-
-- [Node.js](https://nodejs.org/en/download/) version 18.0 or above:
-  - When installing Node.js, you are recommended to check all checkboxes related to dependencies.
-
-## Generate a new site
-
-Generate a new Docusaurus site using the **classic template**.
-
-The classic template will automatically be added to your project after you run the command:
-
-```bash
-npm init docusaurus@latest my-website classic
-```
-
-You can type this command into Command Prompt, Powershell, Terminal, or any other integrated terminal of your code editor.
-
-The command also installs all necessary dependencies you need to run Docusaurus.
-
-## Start your site
-
-Run the development server:
-
-```bash
-cd my-website
-npm run start
-```
-
-The `cd` command changes the directory you're working with. In order to work with your newly created Docusaurus site, you'll need to navigate the terminal there.
-
-The `npm run start` command builds your website locally and serves it through a development server, ready for you to view at http://localhost:3000/.
-
-Open `docs/intro.md` (this page) and edit some lines: the site **reloads automatically** and displays your changes.
diff --git a/docusaurus.config.js b/docusaurus.config.js
index 033c70b..94b9658 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -41,6 +41,7 @@ const config = {
       ({
         docs: false, // Optional: disable the docs plugin
         blog: {
+          path: 'content',
           routeBasePath: '/',
           showReadingTime: true,
           // Please change this to your repo.
@@ -77,7 +78,7 @@ const config = { items: [ { label: 'GitHub', - to: 'https://github.com/spencerlepine/blog', + href: 'https://github.com/spencerlepine/blog', }, ], }, @@ -108,15 +109,15 @@ const config = { metadata: [ { property: 'og:title', - content: 'Spencer Lepine | Software Engineer', + content: 'Blog - Spencer Lepine', }, { property: 'og:image', - content: 'https://spencerlepine.github.io/images/thumbnail.jpg', + content: 'https://spencerlepine.github.io/blog/img/social-card-thumbnail.jpg', }, { property: 'og:description', - content: 'Website of Spencer Lepine, a Software Engineer.', + content: 'Developer blog of Spencer Lepine, a Software Engineer.', }, { name: 'twitter:card', @@ -124,15 +125,15 @@ const config = { }, { name: 'twitter:title', - content: 'Spencer Lepine | Software Engineer', + content: 'Blog | Spencer Lepine', }, { name: 'twitter:image', - content: 'https://spencerlepine.github.io/images/thumbnail.jpg', + content: 'https://spencerlepine.github.io/blog/img/social-card-thumbnail.jpg', }, { name: 'twitter:description', - content: 'Website of Spencer Lepine, a Software Engineer.', + content: 'Developer blog of Spencer Lepine, a Software Engineer.', }, ], }), diff --git a/package.json b/package.json index 98b755f..c777ab6 100644 --- a/package.json +++ b/package.json @@ -30,7 +30,7 @@ }, "husky": { "hooks": { - "pre-commit": "lint-staged" + "pre-commit": "yarn build && lint-staged" } }, "lint-staged": { diff --git a/sidebars.js b/sidebars.js deleted file mode 100644 index 3327580..0000000 --- a/sidebars.js +++ /dev/null @@ -1,33 +0,0 @@ -/** - * Creating a sidebar enables you to: - - create an ordered group of docs - - render a sidebar for each doc of that group - - provide next/previous navigation - - The sidebars can be generated from the filesystem, or explicitly defined here. - - Create as many sidebars as you want. - */ - -// @ts-check - -/** @type {import('@docusaurus/plugin-content-docs').SidebarsConfig} */ -const sidebars = { - // By default, Docusaurus generates a sidebar from the docs folder structure - tutorialSidebar: [{type: 'autogenerated', dirName: '.'}], - - // But you can create a sidebar manually - /* - tutorialSidebar: [ - 'intro', - 'hello', - { - type: 'category', - label: 'Tutorial', - items: ['tutorial-basics/create-a-document'], - }, - ], - */ -}; - -export default sidebars; diff --git a/src/pages/markdown-page.md b/src/pages/markdown-page.md deleted file mode 100644 index 9756c5b..0000000 --- a/src/pages/markdown-page.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -title: Markdown page example ---- - -# Markdown page example - -You don't need React to write simple standalone pages.