Today was the first day of my internship and our first meeting. I was a little excited. I met with my mentor before the meeting, and we talked briefly about the process. He told me what kind of project he would give me, what my role in the project would be, and the competencies I would gain while working on it. I believe this will be a good period in which I improve myself.
In today's meeting, we talked about the details of the project and planned it. I received several assignments for the first step and started keeping my diary. My mentor opened a GitHub account that I will use throughout this project and gave me its credentials. A verification code was required to sign in to the GitHub account, but I could not log in because I did not yet have access to the associated email.
Problems are the first thing we run into in real life. The password I was given to log in to the email address was incorrect, so we had a short conversation and fixed the email password. I then logged in to the email address and the GitHub account with the credentials given to me. I researched how to write the README section of the GitHub repository and, based on what I found, started entering my logs there. I installed Oracle VirtualBox on my computer, researched how to install Ubuntu 18.04 on VirtualBox, and installed Ubuntu 18.04 in it. In the problem determination section, I first identified the problem: the camera misclassifies images during classification. Then I answered the remaining questions: how the images are obtained, where the error occurs, why misclassification is possible, and finally what kind of solution should be applied to solve this problem.
I migrated the logs to the GitHub account and did research for the architecture I will design. I looked at RabbitMQ and Kafka and decided to use RabbitMQ in the architecture. I kept getting errors while trying to install Ubuntu 18.04 on Oracle VM VirtualBox: I got a "ubi-part crashed" error. I searched the internet and tried a few things I found, but failed again. Then I got help from a friend. While we were trying different approaches, my friend found the cause: VirtualBox had to be opened with "Run as administrator". I had missed something that simple, but it has been a good lesson for future installations. I managed to install Ubuntu 18.04. I also started watching a Linux course on Udemy, where I learned how to install Ubuntu and some commands used in the terminal.
After our regular meeting, I took notes on the tasks I would do during the day. First, I continued the Linux course on Udemy, doing the exercises as I watched. By 12:30 I had reached the third section. Then I sent an email about how far I had progressed in the course and took a short break. I researched Kafka, RabbitMQ, and Apache NiFi, and at the end of my research I wrote a Kafka and RabbitMQ comparison in my GitHub account.
I continued following the Linux course. I freed up disk space on my computer to use Ubuntu more comfortably; this took quite a long time, but in the end I got the space I wanted. I couldn't make the Ubuntu installation run full screen, and some research didn't solve the problem. I tried installing RabbitMQ on that small screen anyway, but I could not access the RabbitMQ Management page. I was sure my IP address was correct, but it still didn't work.
I tried installing RabbitMQ on Ubuntu, following steps I found online. I managed to open the RabbitMQ Management page in the browser, although I got minor errors along the way, but it did not accept my username and password. Later, while reinstalling Ubuntu, I ran into a completely different error, so I deleted that Ubuntu and started over. After reinstalling Ubuntu, I fixed the full-screen issue I hadn't been able to solve before.
We had our meeting today, briefly evaluated the past week, and talked about what to do this week. By Friday I will install the technologies I will use on my computer, tidy up the GitHub account, and explain the technologies I use step by step. I also need to do more detailed research. At the meeting we held in the afternoon, I understood better how to write the documentation: I don't need to copy what is already on the internet; I should write down the errors I encounter.
I installed Ubuntu 18.04 Server on my computer and wrote an Apache Kafka, Apache NiFi, and RabbitMQ comparison in my GitHub account. I made quite a bit of progress in the Linux course and participated in the five-question challenge. I researched how Apache Kafka, Apache NiFi, and RabbitMQ handle listening for messages and posted my findings to my GitHub account.
I update the GitHub account every day and take my daily notes. I did research on the project, but I can hardly find the results I want. We held our meeting and created a task list. Following it, I first uninstalled Ubuntu Desktop and installed Ubuntu 18.04 Server instead. Using Ubuntu Server was a bit harder for me; I need to improve at it. I tried to install RabbitMQ on Ubuntu Server but failed: I kept getting the same error message and couldn't find a solution online. So I set it aside for now and moved on to the Linux course, progressing from part 25 to part 61. I did not fully understand some topics in the course, but I think I will understand them better as I use them.
After a long effort, I solved the error I got while installing RabbitMQ by reinstalling it following the instructions on RabbitMQ's web page. I did dataset research for my project, then researched how to download files on the server and, based on what I found, downloaded the dataset to Ubuntu Server. I also did research about cron jobs.
Installing python3 on Ubuntu 18.04 Server gave a "package installation" error. I ran 'sudo add-apt-repository universe multiverse' to solve it. Then I tried to install Java, but that also gave a "package installation" error. This time I ran 'sudo add-apt-repository main'. After doing this, the installation succeeded.
I am continuing my research on cron jobs. I got an error while installing the Python pika library on Ubuntu 18.04 Server: 'cannot import name "sysconfig"'. I ran sudo apt install python3.6-distutils to resolve it, and pika was installed. I also got an error while running RabbitMQ; unable to fix it, I uninstalled and reinstalled RabbitMQ, and this time it worked. I wrote and ran the send.py and receive.py files for RabbitMQ, then created the new_task.py and worker.py files. I needed more terminal windows to run these. In the meantime, the computer shut down and RabbitMQ stopped with it. When I started it again, RabbitMQ didn't work. I did a lot of research online without results. I had made changes to the rabbitmq.conf file; once I removed those changes and restarted RabbitMQ, it worked again. So I understood the cause of the error: it failed whenever I changed the rabbitmq.conf file.
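For my own reference, a minimal sketch of what the two tutorial scripts do, based on the RabbitMQ "Hello World" tutorial and the pika library (the queue name 'hello' comes from the tutorial, not necessarily from my own files):

```python
# send.py - publishes a single message to the 'hello' queue
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.queue_declare(queue='hello')  # make sure the queue exists
channel.basic_publish(exchange='', routing_key='hello', body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()
```

```python
# receive.py - consumes messages from the same queue
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)

channel.basic_consume(queue='hello', on_message_callback=callback, auto_ack=True)
print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
```

new_task.py and worker.py from the second tutorial step follow the same pattern, with the worker acknowledging each message after processing it.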
Today I searched for a function that reads all the images in a folder. I was able to list all the images in the folder with the os.listdir() method. I tried the examples from the RabbitMQ tutorials. The second step of the tutorial required opening a second terminal window on Ubuntu 18.04 Server; I did research on this but couldn't find anything. I have written my code for now, and it remains to be tested. I uploaded all the code I wrote to my GitHub account.
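A small sketch of the approach with os.listdir(); the folder name images/ is a hypothetical placeholder, not my actual path:

```python
import os

IMAGE_DIR = 'images'  # hypothetical folder name, adjust to the real path

# Collect the file names of all images in the folder
image_files = [name for name in os.listdir(IMAGE_DIR)
               if name.lower().endswith(('.png', '.jpg', '.jpeg'))]

# Read each image as raw bytes and keep them in a list
images = []
for name in image_files:
    with open(os.path.join(IMAGE_DIR, name), 'rb') as f:
        images.append(f.read())

print('Read %d images' % len(images))
```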
Today I searched for a dataset together with a model. I found the KolektorSDD2 dataset, and then found the model associated with it.
I need to check whether the model I found works. I looked at what is required to run it and researched whether those technologies are available on Ubuntu 18.04 Server. They all are; the required technologies are PyTorch and CUDA.
I worked on the cron job code that sends the photos in the folder to the port. I read all the pictures in the folder and keep them in a list, but I get an error when sending them to the port. I am continuing to research.
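A sketch of how the sending step might look with pika, under the assumption that the script publishes each image's raw bytes to a RabbitMQ queue; the folder and queue names are hypothetical placeholders:

```python
import os
import pika

IMAGE_DIR = 'images'   # hypothetical folder with the photos
QUEUE = 'images'       # hypothetical queue name

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue=QUEUE)

# Publish every image in the folder as one message of raw bytes
for name in os.listdir(IMAGE_DIR):
    with open(os.path.join(IMAGE_DIR, name), 'rb') as f:
        channel.basic_publish(exchange='', routing_key=QUEUE, body=f.read())

connection.close()
```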
I can't make progress on my project. I keep getting errors and tasks are piling up. I look the errors up on the internet, but they do not go away; when I fix one place, another error appears. Today I tried to set up PyTorch, CUDA, and Anaconda on Ubuntu 18.04 Server, but without success.
Since I couldn't run the model of the dataset I had found, we changed the project a bit today and found a new dataset and model. TensorFlow was needed to run this model. I tried to install it on Windows but couldn't. With the help of my mentor, we were able to install TensorFlow on Ubuntu 18.04 Server. We got constant errors while installing and did a lot of work before TensorFlow was in place. Afterwards I asked myself why it was so hard to set up; I will research this and also share how I installed it. I wrote the model for the Cifar-10 dataset in the Windows environment and added the RabbitMQ consumer and producer code into it. I searched for ways to transfer this file to Ubuntu 18.04 Server. I couldn't do it with VirtualBox shared folders. Then we could not find a desktop environment on Ubuntu 18.04 Server. Finally, we tried a USB stick, but we could not see its contents on the server, and despite trying different solutions, file transfer with USB also failed.
TensorFlow installation: https://emineozturkk.medium.com/ubuntu-18-04-install-tensorflow-dfcc3f904b81
I could not transfer the Python code I had written to Ubuntu 18.04 Server via USB. As a last resort, I retyped the code line by line in the terminal. I got some errors when I ran it afterwards, mostly from typos. The final version ran, but it didn't give the output I wanted.
Today I found the cause of the error: I had forgotten to start RabbitMQ before running the code. I started RabbitMQ first, then ran the code, and got the same error again. I started doing research; my understanding is that the cifar10 loader cannot find the dataset.
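For reference, the loading step I am debugging looks roughly like the sketch below, assuming the model uses tflearn's built-in CIFAR-10 loader; it expects the data under the given dirname and tries to download it there if missing, so it fails when the folder is absent and the download cannot happen:

```python
from tflearn.datasets import cifar10

# Loads CIFAR-10 from 'cifar-10-batches-py' (downloads it there if missing)
(X, Y), (X_test, Y_test) = cifar10.load_data(dirname='cifar-10-batches-py')
print(X.shape, X_test.shape)  # expected: (50000, 32, 32, 3) and (10000, 32, 32, 3)
```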
I'm sick. That's why I couldn't look at the project.
I tried to install MongoDB but failed. For some reason I get errors at the installation stages; even though I followed the steps in the web resource, the installation does not succeed.
I started editing the Python file containing the model, but the 'import tflearn' line raised a ModuleNotFoundError. I did quite a bit of research and tried the suggested fixes, still without success. Later, talking to my mentor, we found the cause: Ubuntu Server was not connecting to the internet, so the package could not be installed and the import failed. We immediately researched how to resolve this and tried a few things, but it didn't work.
Today I checked again whether Ubuntu Server connects to the internet. It still wouldn't. I continued my research and made changes to some configuration files, but it still wouldn't connect. So I deleted Ubuntu 18.04 Server and installed Ubuntu 18.04 Desktop instead, which I was able to do successfully.
I installed RabbitMQ on the Ubuntu 18.04 Desktop I had just set up. It is much more comfortable to use: on Ubuntu Server I could not copy and paste and had to type everything by hand, which made things extra difficult. After completing the RabbitMQ installation, I installed Python and TensorFlow inside a venv. I didn't get any errors while installing TensorFlow and completed the installation successfully.
With TensorFlow installed, the setup needed to run the model was complete. I immediately created a model.py file in the venv and wrote the model for the CIFAR-10 dataset. When I tried to run it, some libraries could not be imported, so I added the missing libraries to the venv with pip3.
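For my own reference, a rough sketch of what model.py contains: a minimal tflearn convolutional network for CIFAR-10 with placeholder layer sizes and batch size, not my exact code:

```python
# model.py - minimal CIFAR-10 convnet sketch in tflearn (placeholder sizes)
import tflearn
from tflearn.datasets import cifar10
from tflearn.data_utils import to_categorical
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression

# Load the dataset and one-hot encode the labels (10 classes)
(X, Y), (X_test, Y_test) = cifar10.load_data()
Y = to_categorical(Y, 10)
Y_test = to_categorical(Y_test, 10)

# A small convolutional network: conv -> pool -> conv -> pool -> dense -> softmax
net = input_data(shape=[None, 32, 32, 3])
net = conv_2d(net, 32, 3, activation='relu')
net = max_pool_2d(net, 2)
net = conv_2d(net, 64, 3, activation='relu')
net = max_pool_2d(net, 2)
net = fully_connected(net, 512, activation='relu')
net = dropout(net, 0.5)
net = fully_connected(net, 10, activation='softmax')
net = regression(net, optimizer='adam', loss='categorical_crossentropy',
                 learning_rate=0.001)

# Train for 50 iterations; show_metric prints the accuracy during training
model = tflearn.DNN(net)
model.fit(X, Y, n_epoch=50, validation_set=(X_test, Y_test),
          show_metric=True, batch_size=96)
```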
I was able to run the model for the CIFAR-10 dataset without RabbitMQ. The model runs for 50 iterations, and it took quite a while since there are 50,000 images, but it finished successfully. While the model was training, I started the presentation for introducing and describing my project; in it I will touch on every detail of the project.
After the model finished running, I started researching how to save it and found several resources. Along the way, I came to understand my dataset and model in more detail.
I made adjustments to the model and printed its accuracy on the screen: it reached 88 percent accuracy in 50 iterations. I continued researching how to save the model and tried what I found, but I couldn't save it with the .h5 extension.
I continued researching how to save the model. I tried the sample code I found, but it still didn't work the way I wanted: when I call model.save(...), .index, .meta, and .data files are created, but I could not get from there to a .h5 file.
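If the model is a tflearn DNN, this behaviour makes sense: tflearn's model.save() wraps TensorFlow's checkpoint saver, which always produces the .index/.meta/.data files, while the .h5 (HDF5) format belongs to Keras models. A sketch of the difference, continuing from the model.py sketch above where model is a tflearn.DNN (file names are placeholders):

```python
# tflearn: save() writes a TensorFlow checkpoint (.index, .meta, .data files)
model.save('cifar10_model.tfl')
model.load('cifar10_model.tfl')   # restored the same way

# The .h5 format is specific to Keras: a tf.keras model would be saved with
# keras_model.save('cifar10_model.h5'), but a tflearn DNN does not support
# that format directly.
```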
Today I prepared the project presentation, created the Project User Manual file, and tidied up the GitHub account.