<!DOCTYPE HTML>
<!--
Future Imperfect by HTML5 UP
html5up.net | @ajlkn
Free for personal and commercial use under the CCA 3.0 license (html5up.net/license)
-->
<html>
<head>
<title>Obstacle-Avoiding Robot - Grace Zhang</title>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no" />
<link rel="stylesheet" href="assets/css/main.css" />
<link rel="stylesheet" href="assets/css/w3.css">
</head>
<body class="single is-preload">
<!-- Wrapper -->
<div id="wrapper">
<!-- Header -->
<header id="header">
<h1><a href="index.html">Grace Zhang</a></h1>
<nav class="links" id="mylinks">
<ul>
<li><a href="index.html#projects">Projects</a></li>
<li><a href="index.html#experience">Experience</a></li>
<li><a href="index.html#about">About</a></li>
<li><a href="index.html#contact">Contact</a></li>
<li><a href="https://www.linkedin.com/in/gracezhang-ece/">LinkedIn</a></li>
</ul>
<!-- <a href="javascript:void(0);" class="icon" onclick="toHamburger()">
<i class="fa fa-bars"></i> -->
</nav>
</header>
<!-- Main -->
<div id="main">
<!-- Post -->
<article class="post">
<header>
<div class="title">
<h2 style="font-family:'Playfair Display';">Obstacle-Avoiding Robot</h2>
<p>
Over the course of four labs, I worked on a robot that used <b>photoresistors to follow light</b>, used
<b>passive and active filters</b> to react to specific frequencies, and used an <b>ultrasonic sensor to navigate a maze</b>.
The robot was controlled by an <b>Arduino Nano Every</b>, and the filters were simulated in <b>LTSpice</b>.
This robot was built for the ECE 3400: Intelligent Physical Systems course at Cornell University.
</p>
</div>
</header>
<iframe style="display: block; margin: auto; margin-bottom: 20px;" width="560" height="315" src="https://www.youtube.com/embed/aIKl1MrGgQI" title="Video of Jobert the robot following light and avoiding obstacles." frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<div class="w3-content w3-display-container">
<span class="image featured"><img src="images/ece3400 robot.jpg" alt="" /></span>
<span class="image featured"><img src="images/lab4_graph2.jpg" alt="" /></span>
<span class="image featured"><img src="images/lab3 band pass circuit.jpg" alt="" /></span>
<button class="w3-button w3-black w3-display-left" onclick="plusDivs(-1)">❮</button>
<button class="w3-button w3-black w3-display-right" onclick="plusDivs(1)">❯</button>
</div>
<h3>Project Website</h3>
<p><a href="https://pages.github.coecis.cornell.edu/gtz4/3400-labs/">Check out my website here</a></p>
<h3>Lab 1: Light Following Robot Part 1</h3>
<!-- Objectives -->
<h4>Objectives: </h4>
<ul>
<li>Learn to program the Arduino.</li>
<li>Use photoresistors in conjunction with the Arduino to set up the light-sensing component of the
light-following robot, keeping in mind the following behaviors:
</li>
<ul>
<li>When the robot is in normal lighting conditions, or when there is too much light
everywhere, it will turn around in place, not knowing where to go.
</li>
<li>When bright (brighter) light hits the robot from one side or the other, it will move towards
it until the bright light is turned off again or the robot turns to face the light, at which
point it will move towards it in a straight line.
</li>
</ul>
</ul>
<!-- Materials -->
<h4>Materials: </h4>
<ul>
<li>2 x CdS photoresistors</li>
<li>Arduino Nano Every</li>
<li>2 x 10kOhm resistors</li>
<li>Jumper wires</li>
<li>Breadboard</li>
<li>Smartphone flashlight</li>
</ul>
<!-- Part 1 -->
<h4>Part 1: Install the Arduino IDE: </h4>
<div>
I installed the Arduino IDE from <a href="https://www.arduino.cc/en/software">this link</a> and
configured it for the Arduino Nano Every.
</div>
<br>
<!-- Part 2 -->
<h4>Part 2: Controlling the Arduino's Onboard LED: </h4>
<div>
I compiled the code for the Blink sketch provided by Arduino and programmed the board with it to
ensure that the Arduino Nano Every was working properly. The code for this was given to us in Arduino
IDE. After uploading the code and positioning the board on the breadboard, the Arduino blinked on and
off as expected.
</div>
<br>
<!-- Part 3 -->
<h4>Part 3: CdS Photosensors: </h4>
<div>
<p>
I made a circuit based on Fig. 5 in the lab document. R1, the pulldown resistor, was 10kOhms.
V<sub>in</sub> was connected to the +5V pin on the Arduino, the 0V was connected to the GND pin
on the Arduino, and the V<sub>out</sub> was connected to the analog pin A0.
</p>
<p>
To calculate the current drawn by the circuit, I first looked at the datasheet for the CdS
photosensor. The maximum resistance was 0.5MOhms in the dark; under illumination, the maximum
resistance was 33kOhms and the minimum resistance was 16kOhms. Then, I used Ohm's law V=IR
to find the current for each of these three resistances. The calculated currents are below:
</p>
<table class="center">
<tr>
<th>Lighting Condition </th>
<th> Total Resistance </th>
<th> Current</th>
</tr>
<tr>
<td>Dark</td>
<td>510kOhms</td>
<td>9.8uA</td>
</tr>
<tr>
<td>Light (dim)</td>
<td>43kOhms</td>
<td>0.12mA</td>
</tr>
<tr>
<td>Light (bright)</td>
<td>26kOhms</td>
<td>0.19mA</td>
</tr>
</table>
<br>
<div class="container-image">
<figure text-align="center">
<img src="images/lab1 fig5.jpg" alt="Lab 1 Figure 5" width="40%" height="auto">
<figcaption>Figure 5 from the Lab 1 document. </figcaption>
</figure>
</div>
<p>
To be able to read the output from the circuit, I downloaded <code>CdS_ReadA0.ino</code> from Canvas.
However, before I could properly understand the output from this sketch, I needed to determine
V<sub>ref</sub> to be able to find the value of V<sub>in</sub>, the signal we're trying to read,
from the equation below.
</p>
<div class="container-image">
<figure text-align="center">
<img src="images/lab1 result.jpg" alt="Result=(1023*Vin)/Vref" width="40%" height="auto">
<figcaption>ADC result equation from the Lab 1 document. </figcaption>
</figure>
</div>
<p>
I made the sketch <code>readADC_CTRLCbit.ino</code> from the code provided in the lab handout. When
I ran this code, the Serial port displayed <code>01</code>, as expected. This meant that V<sub>DD</sub>
was selected as the REFSEL for the ADC. To find V<sub>ref</sub>, I connected 3.3V directly to the A0 pin
and then measured the <code>analogRead</code> value from the A0 pin. The following is the data from the
Serial Monitor.
</p>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab1 serial1.jpg" alt="715" width="40%" height="auto">
<figcaption>Serial output when V<sub>in</sub>=3.3V. </figcaption>
</figure>
</div>
<p>
Using the ADC result equation from earlier, we can now calculate that V<sub>ref</sub> is 4.72V.
</p>
<p>
Using the original circuit from Figure 5 and the measured V<sub>ref</sub>, I calculated possible
<code>analogRead</code> values. Since the circuit is a voltage divider, I used the voltage divider
equation to calculate the possible V<sub>out</sub> voltages. Then, I used this voltage as the
V<sub>in</sub> voltage in the ADC result equation above. The findings are in the table below.
</p>
<table class="center">
<tr>
<th>Lighting Condition </th>
<th> Voltage at A0 </th>
<th> analogRead Value</th>
</tr>
<tr>
<td>Dark</td>
<td>0.098V</td>
<td>21.24</td>
</tr>
<tr>
<td>Light (dim)</td>
<td>1.163V</td>
<td>252</td>
</tr>
<tr>
<td>Light (bright)</td>
<td>1.923V</td>
<td>417</td>
</tr>
</table>
<br>
<p>
By reconnecting the A0 pin to the V<sub>out</sub> of the circuit from Figure 5 and running the
<code>CdS_ReadA0.ino</code> sketch, I took the <code>analogRead</code> values when the sensors were
in normal conditions and then when a flashlight was pointed at it. The results are in the table below.
While the values from the previous table and this table don't match, this could be due to
variations in the real-world environment, such as ambient lighting and component tolerances.
</p>
<table class="center">
<tr>
<th>Lighting Condition </th>
<th> analogRead Value</th>
</tr>
<tr>
<td>Dark</td>
<td>60</td>
</tr>
<tr>
<td>Normal</td>
<td>614</td>
</tr>
<tr>
<td>Flashlight</td>
<td>885</td>
</tr>
</table>
<br>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab1 1sensor.jpg" alt="1 CdS" width="40%" height="auto">
<figcaption>Circuit with 1 CdS Photosensor. </figcaption>
</figure>
</div>
<p>
I then built a second circuit from Figure 5 and connected the V<sub>out</sub> of the circuit to the A1
pin of the Arduino.
</p>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab1 2sensor.jpg" alt="2 CdS" width="40%" height="auto">
<figcaption>Circuit with 2 CdS Photosensors. </figcaption>
</figure>
</div>
</div>
<br>
<!-- Part 4 -->
<h4>Part 4: Coding to Control the Robot: </h4>
<div>
<p>
To avoid constant recalibration while moving the robot, I used Normalized Measurement for each sensor.
The equation for the left sensor is shown below.
</p>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab1 nm.jpg" alt="NMleft" width="40%" height="auto">
<figcaption>Normalized Measurement for left sensor. </figcaption>
</figure>
</div>
<p>
I made a new sketch called <code>CdS_ReadA0A1</code> based on <code>CdS_ReadA0</code>. For this new
sketch, I added functionality to read the new analog pin A1. I mapped A0 to the left and A1 to the right.
I also added calculations for the normalized measurements for both the left and right sides. To ensure that
there were no rounding errors for these variables, I had to cast the <code>analogRead</code> values to
floats during the calculation. I then wrote these values to the Serial Monitor to get an idea of what the
thresholds would be. There was a delay of 1 second between each print to the monitor.
</p>
<iframe style="display:block; margin:auto;" width="420" height="315"
src="https://www.youtube.com/embed/bvgcoYdtM2o"
frameborder="0"
allowfullscreen>
</iframe>
<p style="text-align:center">Video of the photoresistor activity (https://www.youtube.com/watch?v=bvgcoYdtM2o). </p>
</div>
<!-- Conclusion -->
<h4>Conclusion: </h4>
<div>
<p>
This lab was fairly simple and I was able to get everything working without much difficulty. The only
confusing part was that the values calculated for <code>analogRead</code> under different lighting
conditions were very different from what I actually saw on the Serial monitor. However, I believe that I
have a good base for the next lab, where the robot will move according to these sensors.
</p>
</div>
<h3>Lab 2: Light Following Robot Pt. 2</h3>
<!-- Objectives -->
<h4>Objectives: </h4>
<ul>
<li>Integrate the Lab 1 CdS photoresistors with the motors and H-bridge to turn the robot into a light-follower.</li>
<li>The robot will do the following:</li>
<ul>
<li>
In normal or too bright lighting conditions, the robot will turn around in place and blink its
onboard LED on for 500ms and off for 500ms.
</li>
<li>
When bright (brighter) light hits the robot from one side or the other, it will move towards it
until the bright light is turned off again or the robot turns to face the light, at which point it
will move towards it in a straight line. The onboard LED stops blinking.
</li>
<li>
The above cycle is repeated depending on if the bright light is removed, or if the bright light
continues to shine, with the robot adapting as described above.
</li>
</ul>
</ul>
<!-- Constraints -->
<h4>Constraints: </h4>
<ul>
<li>Cannot use <code>delay()</code></li>
<li>Cannot use any external library</li>
<li>Cannot use interrupts</li>
</ul>
<!-- Materials -->
<h4>Materials: </h4>
<ul>
<li>2 x CdS photoresistors</li>
<li>Arduino Nano Every</li>
<li>2 x 10kOhm resistors</li>
<li>Jumper wires</li>
<li>Breadboard</li>
<li>Smartphone flashlight</li>
<li>L293D chip</li>
<li>AA and 9V batteries</li>
<li>Robot frame</li>
</ul>
<!-- Part 0 -->
<h4>Part 0: Playing with the ADC: </h4>
<div>
<p>
To begin, I had to figure out the timing of the ADC process. First, I found the default value of the
prescaler by making a sketch that read the appropriate bits from the CTRLC register of the ADC. The bits
I needed from ADC0.CTRLC were bits 0, 1, and 2. To do this, I simply modified the code from
<code>readADC_CTRLCbit</code> and changed the bit values that were being read. This new sketch was
<code>readADC_prescaler</code>. I found the prescaler value to be 0b110, or 0x6. This means that the
default is CLK_PER divided by 128.
</p>
<p>
The next part of the lab was determining the prescaler that sets the value of CLK_PER. To do this, I read
bits 1 through 4 of the MCLKCTRLB register of CLKCTRL and found that it was 0b0000 or 0x0. This meant that
the prescaler value would be 2. However, the Prescaler Enable (PEN) bit was set to 0, meaning this
prescaler was not used. Thus, CLK_PER is the same frequency as CLK_MAIN, which is 16MHz.
</p>
<p>
With all of this information, I could calculate the default CLK_ADC value, which was 16MHz/128 = 125kHz.
The maximum ADC clock frequency is 1.5MHz, so the ideal prescaler divisor would be 16MHz/1.5MHz = 10.667.
Since this is not an available prescaler setting, the smallest usable prescaler is 16.
<p>
After this, I tested various ADC prescaler values to see how fast the ADC could actually be operated. I
connected +3.3V to the A3 pin and ran <code>ADC_SingleConvClass</code>. I changed the prescaler value and
observed the reading of the A3 pin. The results are below.
</p>
<table class="center">
<tr>
<th>Prescaler Value </th>
<th> Serial Output</th>
</tr>
<tr>
<td>2</td>
<td>1023</td>
</tr>
<tr>
<td>4</td>
<td>1023</td>
</tr>
<tr>
<td>8</td>
<td>714</td>
</tr>
<tr>
<td>16</td>
<td>712</td>
</tr>
<tr>
<td>32</td>
<td>716</td>
</tr>
<tr>
<td>64</td>
<td>714</td>
</tr>
<tr>
<td>128</td>
<td>712</td>
</tr>
<tr>
<td>256</td>
<td>712</td>
</tr>
</table>
<p>
The output failed to give the expected value when the prescaler was less than 8. This is probably because
at those settings the ADC clock exceeds its maximum rated frequency, so conversions do not complete at
full resolution, causing the recorded values to be off.
</p>
</div>
<!-- Part 1 -->
<h4>Part 1: Familiarizing Myself with the H-Bridge: </h4>
<div>
<p>
I used Figure 2 from the lab handout to properly connect the Arduino to the motor controller. Vcc1
was connected to 5V (from the Arduino) and Vcc2 of the motor controller was connected to the 4.5V battery.
All of the grounds were connected together. For the first motor, ENA was connected to Arduino pin D6 (one of
the PWM pins), IN1 was connected to D10, and IN2 was connected to D9. For the second motor, ENB was connected
to D5 (a PWM pin), IN3 was connected to D18, and IN4 was connected to D17.
</p>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab2 hbridge pinout.jpg" alt="pinout" width="40%" height="auto">
<figcaption>Motor controller pinout. </figcaption>
</figure>
</div>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab2 hbridge_connections.jpg" alt="hbridge connections" width="40%" height="auto">
<figcaption>Photo of one motor connected. </figcaption>
</figure>
</div>
<p>
I referenced the code provided in the handout and used <code>analogWrite()</code> and
<code>digitalWrite()</code> to make the motors spin in the following configurations:
</p>
<ul>
<li>Both wheels turning together forward.</li>
<li>Both wheels turning together backward.</li>
<li>Both wheels turning in opposite directions from one another.</li>
<li>Both wheels coming to a full stop.</li>
</ul>
<p>The photoresistors were not used for this part. </p>
<iframe style="display:block; margin:auto;" width="420" height="315"
src="https://www.youtube.com/embed/Q32PopC95-w"
frameborder="0"
allowfullscreen>
</iframe>
<p style="text-align:center">Video of the wheels spinning. </p>
</div>
<!-- Part 2 -->
<h4>Part 2: Calibrating the Motors: </h4>
<div>
<p>
For this part, I had to calibrate the motors so that the robot (named Jobert) would run in a straight line.
To do this, I simply coded the wheels to make the robot roll forward and looked at its behavior. If Jobert
turned, I added a small value to the pin connected to the output of the respective motor.
</p>
<iframe style="display:block; margin:auto;" width="420" height="315"
src="https://www.youtube.com/embed/Z3U6a_0KSbo"
frameborder="0"
allowfullscreen>
</iframe>
<p style="text-align:center">Video of Jobert rolling in a straight line. </p>
</div>
<!-- Part 3 -->
<h4>Part 3: Incorporating the Photosensors: </h4>
<div>
<p>This was the part where everything came together. The following behavior was implemented: </p>
<ul>
<li>
When the robot is in normal lighting conditions, the robot will turn around in place, not knowing
where to go. The onboard LED on the Arduino will toggle every 500ms.
</li>
<li>
When bright light hits the robot from one side, the onboard LED will stop blinking and the robot
will move towards the bright light until the bright light is turned off again or the robot has turned
sufficiently so that it faces the light, at which point the robot will move towards it in a straight line.
</li>
<li>
The above cycle is repeated depending on if the bright light is removed or if the bright light continues
to shine, with the robot adapting as described above.
</li>
</ul>
<p>
I integrated the code from the previous lab with the code from this lab. The code checked both the actual
sensor values and the normalized values to determine the motor behavior. I implemented functions for
driving forward, driving backward, turning left, turning right, and stopping so that I could call them easily.
Since <code>delay()</code> was not allowed, I made a separate function <code>nextState</code>
that kept track of the time and decided when to switch to the next state.
</p>
<iframe style="display:block; margin:auto;" width="420" height="315"
src="https://www.youtube.com/embed/towdL4oPLBU"
frameborder="0"
allowfullscreen>
</iframe>
<p style="text-align:center">Video of Jobert following light. </p>
</div>
<h3>Lab 3: Filtering and FFT</h3>
<!-- Objectives -->
<h4>Objectives: </h4>
<p>
Integrate and test passive and active filters in hardware and on my computer and compare them to what is
predicted. A bandpass filter will be implemented, tested, and mounted on the robot in the next lab.
</p>
<!-- Constraints -->
<h4>Constraints: </h4>
<ul>
<li>Cannot use <code>analogRead()</code></li>
</ul>
<!-- Materials -->
<h4>Materials: </h4>
<ul>
<li>Capacitors and resistors</li>
<li>Arduino Nano Every</li>
<li>Op amps</li>
<li>MATLAB</li>
<li>Speakers</li>
<li>Breadboard</li>
</ul>
<!-- Part 1 -->
<h4>Part 1: LTSpice Basics: </h4>
<div>
<p>
I made both a low pass and a high pass filter in LTSpice using R=1.2kOhm and C=0.1uF. The cutoff frequency for
both of these filters was 1.3kHz, defined at the usual -3dB point. The results from running the
simulation are shown below.
</p>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab3 low pass circuit.jpg" alt="low pass ltspice circuit" width="40%" height="auto">
<figcaption>Low pass LTSpice circuit. </figcaption>
</figure>
</div>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab3 low pass.jpg" alt="low pass ltspice" width="40%" height="auto">
<figcaption>Low pass LTSpice simulation. </figcaption>
</figure>
</div>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab3 low pass closeup.jpg" alt="low pass ltspice close" width="40%" height="auto">
<figcaption>Closeup of low pass LTSpice simulation. </figcaption>
</figure>
</div>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab3 high pass circuit.jpg" alt="high pass ltspice circuit" width="40%" height="auto">
<figcaption>High pass LTSpice circuit. </figcaption>
</figure>
</div>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab3 high pass.jpg" alt="high pass ltspice" width="40%" height="auto">
<figcaption>High pass LTSpice simulation. </figcaption>
</figure>
</div>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab3 high pass closeup.jpg" alt="high pass ltspice close" width="40%" height="auto">
<figcaption>Closeup of high pass LTSpice simulation.</figcaption>
</figure>
</div>
</div>
<h4>Part 2: Building the Unamplified Microphone Circuit: </h4>
<div>
<p>
After unplugging the batteries and the Arduino, I recreated the circuit for the microphone without
amplification, paying close attention to the polarity of the microphone. I used R1 = 3.3kOhm and C1 = 10uF,
as required in the lab handout. The output of this circuit was connected to pin AIN4 (PD4, A6, D20).
</p>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab3 circuit1 schematic.jpg" alt="lab3 circuit1 schematic" width="40%" height="auto">
<figcaption>Unamplified microphone circuit schematic from lab handout. </figcaption>
</figure>
</div>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab3 circuit1.jpg" alt="lab3 circuit1" width="40%" height="auto">
<figcaption>Unamplified microphone circuit.</figcaption>
</figure>
</div>
</div>
<h4>Part 3: Coding the Arduino and MATLAB to Characterize Circuits: </h4>
<div>
<p>
First, I set up the Arduino code, <code>freeRunADC_MATLAB.ino</code>. As mentioned before, we were not allowed
to use the function <code>analogRead()</code> since it was too slow. Instead, I manually set up the ADC to be
in Free Running mode. The sampled ADC values had to be converted to a 16-bit value before being sent to Serial.
</p>
<p>
After this was done, I set up the MATLAB code, <code>readData_INT_Canvas_gtz4.m</code> to fetch data from
the Arduino. This MATLAB file played a sound at 500Hz, collected the Arduino ADC data, ran an FFT on that
data, and plotted the data and its FFT. The graph is shown below.
</p>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab3 500Hz.jpg" alt="lab3 500Hz" width="40%" height="auto">
<figcaption>Graph of unamplified signal at 500Hz in time and frequency domains. </figcaption>
</figure>
</div>
<p>
As seen in the graph above, the magnitude of the signal at 500Hz is very small. This is because there is
no amplification or filtering on the microphone circuit.
</p>
</div>
<h4>Part 4: Improving the Microphone Circuit: </h4>
<div>
<p>
I implemented the circuit from the lab handout as shown below. The resistor values are in the following table.
</p>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab3 amplified circuit schematic.jpg" alt="lab3 circuit2 schematic" width="40%" height="40%">
<figcaption>Amplified microphone circuit schematic from lab handout. </figcaption>
</figure>
</div>
<table class="center">
<tr>
<th>Resistor/Capacitor </th>
<th> Value</th>
</tr>
<tr>
<td>R1, R4</td>
<td>3.3kOhm</td>
</tr>
<tr>
<td>R2, R3</td>
<td>10kOhm</td>
</tr>
<tr>
<td>R5</td>
<td>511kOhm</td>
</tr>
<tr>
<td>C1</td>
<td>10uF</td>
</tr>
</table>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab3 circuit2.jpg" alt="lab3 circuit2" width="40%" height="40%">
<figcaption>Amplified microphone circuit. </figcaption>
</figure>
</div>
<p>
I used the same code as the previous section to obtain the time and frequency domain plots of the amplified
signal at 500Hz.
</p>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab3 amplified 500Hz v4.jpg" alt="lab3 amplified 500Hz" width="40%" height="40%">
<figcaption>Graph of amplified signal at 500Hz in time and frequency domains. </figcaption>
</figure>
</div>
<p>
The gain of the amplified circuit was calculated by taking Vout/Vin from the amplified and unamplified
frequency plots at 500Hz. The gain turned out to be 146.6. The predicted gain was R5/R4, or 154.8.
</p>
</div>
<h4>Part 5: Testing the Low and High Pass Circuits: </h4>
<div>
<p>
After this, I then tested the low and high pass filters that were built in LTSpice in part 1. I first tested
the low pass filter by building the circuit and connecting the output of my microphone amplifier circuit
into the input of the low pass filter. I ran the same MATLAB code as before but changed the frequency to
run from 100Hz to 2000Hz. I saved the data from both the output of the amplified microphone circuit and the
output from the low pass filter and then divided these two data sets to find the frequency response. I graphed
this frequency response along with the gain from the LTSpice simulation to get the graph below.
</p>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab3 amplified lpf exp v theor.jpg" alt="lab3 amplified lpf" width="40%" height="40%">
<figcaption>Graph of low pass filter theoretical vs. experimental frequency response.</figcaption>
</figure>
</div>
<p>
I did the same thing again but replaced the low pass filter with the high pass filter to get the following
graph.
</p>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab3 amplified hpf exp v theor.jpg" alt="lab3 amplified hpf" width="40%" height="40%">
<figcaption>Graph of high pass filter theoretical vs. experimental frequency response. </figcaption>
</figure>
</div>
<p>
Unfortunately, for both of these filters, the experimental frequency response was very far from the
theoretical one. This may have been due to the breadboard or physical parts (if I nudged the parts, a drastically
different response would sometimes appear). Additionally, sampling with my laptop may have caused this problem;
measuring the frequency response with an oscilloscope might have been more reliable.
</p>
</div>
<h4>Part 6: Bandpass Filters: </h4>
<div>
<p>
In this part of the lab, I built a Butterworth 4-pole bandpass filter that was set to pass frequencies between
500Hz and 900Hz. The schematic is shown below. Since I couldn't find 9.1kOhm resistors, I used 10kOhm
resistors instead.
</p>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab3 band pass circuit schematic.jpg" alt="lab3 bandpass schematic" width="40%" height="40%">
<figcaption>Band pass circuit schematic from lab handout. </figcaption>
</figure>
</div>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab3 band pass circuit.jpg" alt="band pass ltspice circuit" width="40%" height="40%">
<figcaption>Band pass LTSpice circuit. </figcaption>
</figure>
</div>
<p>
I then went through the same process as before and graphed the frequency response of the band pass filter.
While the filter was better than the low and high pass filters, it still wasn't matching what was expected,
probably for some of the same reasons as before.
</p>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab3 amplified bandpass exp v theor.jpg" alt="lab3 bandpass" width="40%" height="40%">
<figcaption>Graph of band pass filter theoretical vs. experimental frequency response. </figcaption>
</figure>
</div>
</div>
<h4>Part 7: Running the FFT on the Arduino: </h4>
<div>
<p>
In the final part of this lab, I ran the FFT of the unfiltered, amplified microphone circuit on the Arduino.
Because the filters were not behaving as expected, I used only the unfiltered output of the circuit.
</p>
<p>
To be able to run the FFT on the Arduino, I downloaded the Arduino FFT library. I then modified the Arduino
code (now <code>freeRunADC_ISR_gtz4.ino</code>) to be able to use the ISR to store the
<code>ADC0_read()</code> values and to print out the converted FFT values of 257 time samples to the Serial
monitor. These values were then copied and pasted into a MATLAB script that graphed the FFT output. The FFT
from various sounds are displayed below.
</p>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab3pt3 500.jpg" alt="lab3 500Hz arduino" width="40%" height="40%">
<figcaption>Graph of amplified, unfiltered signal at 500Hz in time and frequency domains. </figcaption>
</figure>
</div>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab3pt3 700.jpg" alt="lab3 700Hz" width="40%" height="40%">
<figcaption>Graph of amplified, unfiltered signal at 700Hz in time and frequency domains. </figcaption>
</figure>
</div>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab3pt3 900.jpg" alt="lab3 900Hz" width="40%" height="40%">
<figcaption>Graph of amplified, unfiltered signal at 900Hz in time and frequency domains. </figcaption>
</figure>
</div>
</div>
<h3>Lab 4: Ultrasonic Sensors and All</h3>
<!-- Objectives -->
<h4>Objectives: </h4>
<p>
Combine everything so far and include an ultrasonic sensor (US) for ranging and obstacle sensing. This
lab is split into two parts: robot detecting two objects and robot navigating a maze.
</p>
<!-- Constraints -->
<h4>Constraints: </h4>
<p>Constraints for Demo 1:</p>
<ul>
<li>Cannot use any library other than the FFT library used in Lab 3</li>
<li>Cannot use functions like <code>attachInterrupt()</code></li>
<li>Cannot use the function <code>delay()</code> or any other blocking function or blocking code.</li>
<li>Can use PWM for controlling the motors, but to avoid any problem with the timers
must use pins D3 (PF5) and D6 (PF4) to do PWM for the motors</li>
<li>Must use the amplified microphone circuit</li>
<li>For the video recording, the RX and TX LEDs must remain off at all times</li>
</ul>
<p>Constraints for Demo 2:</p>
<ul>
<li>Robot must not navigate too quickly. It should take the robot at least 3 seconds to travel 30 cm</li>
<li>All that is asked in Demo 2 must be visible and confirmed in the video.</li>
<li>Cannot use any library</li>
<li>Cannot use functions like <code>attachInterrupt()</code></li>
<li>Cannot use the function <code>delay()</code> or any other blocking function or blocking code.</li>
<li>Can use PWM for controlling the motors, but to avoid any problem with the timers
must use pins D3 (PF5) and D6 (PF4) to do PWM for the motors</li>
<li>Must use the amplified microphone circuit</li>
<li>For the video recording, the RX and TX LEDs must remain off at all times</li>
</ul>
<!-- Materials -->
<h4>Materials: </h4>
<ul>
<li>2 x CdS photoresistors</li>
<li>Capacitors and resistors</li>
<li>Arduino Nano Every</li>
<li>Op amps</li>
<li>MATLAB</li>
<li>Speakers</li>
<li>Breadboard</li>
<li>Smartphone flashlight</li>
<li>L293D chip</li>
<li>AA and 9V batteries</li>
<li>Robot frame</li>
</ul>
<!-- Part 1 -->
<h4>Part 1: Robot Detecting Two Objects: </h4>
<div>
<p>
    To begin, I added the ultrasonic sensor. It was connected to 5V and GND, with its echo and trigger
    pins connected to Arduino pins 9 and 10, respectively. Jobert (the robot) otherwise remained the
    same as in previous labs.
</p>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab4 robot.jpg" alt="lab4 robot" width="40%" height="40%">
<figcaption>Lab 4 Robot</figcaption>
</figure>
</div>
<p>
To ensure that the US was working, I used the code covered in class to take measurements at distances
between 2 and 50 centimeters from the sensor. The results are below. As you can see, the US was very accurate
for smaller distances and gradually became less accurate as the distances increased.
</p>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab4_graph1.jpg" alt="lab4 graph1" width="40%" height="40%">
<figcaption>Graph of measured distance as a function of actual distance. </figcaption>
</figure>
</div>
<p>
After this, Jobert was coded to detect two objects after hearing a certain frequency. Boxes were placed
in the configuration below.
</p>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab4_demo1_setup.jpg" alt="lab4 demo1 setup" width="40%" height="40%">
<figcaption>Picture of placement of obstacles for part 1 demo. </figcaption>
</figure>
</div>
<p>The requirements for this demo were: </p>
<ol>
    <li>The robot will start in place, motionless, with its onboard LED off</li>
    <li>When playing the sound file Demo1_sound.wav from Canvas, the robot will do the following
        upon detecting the frequency of 550Hz:
        <ul>
            <li>Turn around slowly (4-5 seconds). </li>
            <li>
                The robot will have the onboard LED on from the moment it detects the obstacle until it stops
                facing it.
            </li>
            <li>When the robot stops facing an obstacle, its onboard LED turns off.</li>
            <li>Repeat for the second obstacle.</li>
            <li>
                After the robot returns to its starting position, it will remain motionless with the
                LED off forever.
            </li>
        </ul>
    </li>
</ol>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab4_graph2.jpg" alt="lab4 graph1" width="40%" height="40%">
<figcaption>Spectrum of Demo1_sound.wav. </figcaption>
</figure>
</div>
<p>
    Much of the software was very similar to the previous week's lab. The main changes were adding the
    ultrasonic distance sensor and handling the interrupts slightly differently within the loop.
    Additionally, since delays and blocking code were not allowed, a <code>wait()</code>
    function was added to keep track of timing for the US.
</p>
</div>
<h4>Part 2: Navigating Robot: </h4>
<div>
<p>
In the second demo, Jobert was coded to follow a light, much like in Lab 2, and also detect obstacles.
</p>
<div class="container-image">
<figure text-align="center">
<img class="resize" src="images/lab4_demo2_setup.jpg" alt="lab4 graph1" width="40%" height="40%">
<figcaption>Picture of placement of obstacles for part 2 demo. </figcaption>
</figure>
</div>
<p>The requirements for this demo were: </p>
<ol>
<li>The robot will start in place, motionless, with its onboard LED off, facing north</li>
<li>
The robot will follow a light to the left for approximately 30 cm. The onboard LED will turn off
at this point.
</li>
<li>
Following the path, the robot will follow the light to obstacle 1. When it gets within 5 cm of the
obstacle, the robot stops and the LED turns off.
</li>
<li>
The robot will only begin moving again when a light is shone on its right side. The robot will
follow the light and the LED will turn off. The robot will follow the path in the diagram above.
</li>
<li>
The robot will detect obstacle 2 at a distance of 30 cm and the LED will turn on. After the robot
is about 5 cm away from obstacle 2, the robot will stop and the LED will turn off.
</li>
<li>
A light will be shone on the left side, causing the robot to rotate left and face obstacle 1 again.
At this point, it will stop. The LED will be on for this entire time.
</li>
<li>
The robot will then be lured north by the light, where it will encounter obstacle 3. The onboard LED
will be off and it will stop moving.
</li>
</ol>
<p>
To implement this part, I first edited the code to remove the FFT and audio sampling capabilities, since they
were not needed for this part of the lab. I then reran the code from Lab 1 to recalibrate the CdS photosensors.
Since the same area I used previously was not large enough for this demo, I had to change the settings to
work with a different lighting setup. Since I tested in the dark, the threshold values for light detection
were much lower.
</p>
<p>
After recalibrating the light detection capabilities, I created a state machine that ran through each of the
steps above. At every stage, I checked whether light was detected and whether an obstacle was detected. These
were done by calling functions that I made that handled these separately from the main code. Based on the
state and the surroundings of Jobert, I then decided his movement. The US code was the same as in demo 1.
At first, I tested this by moving the robot manually and adding print statements to confirm that I was
detecting the light and obstacles and stepping through the states correctly. Once I was sure the logic
worked, I added in the code to control the motors.
</p>
</div>
<iframe style="display:block; margin:auto;" width="420" height="315"
src="https://www.youtube.com/embed/aIKl1MrGgQI"
frameborder="0"
allowfullscreen>
</iframe>
<p style="text-align:center">Video of Jobert following light and avoiding obstacles. </p>
</article>
</div>
<!-- Footer -->
<section id="footer">
<p class="copyright"> <a href="http://html5up.net">HTML5 UP</a>. Images: <a href="http://unsplash.com">Unsplash</a>.</p>
</section>
</div>
<!-- Scripts -->
<script src="assets/js/jquery.min.js"></script>
<script src="assets/js/browser.min.js"></script>
<script src="assets/js/breakpoints.min.js"></script>
<script src="assets/js/util.js"></script>
<script src="assets/js/main.js"></script>
<script src="assets/js/slideshow.js"></script>
</body>
</html>