This website documents the lab projects of ECE 5160 (Spring 2023).
Hello! This is Weizhe Zhao, an MEng student in ECE at Cornell. This website documents my work for the labs in ECE 5160: Fast Robots this spring.
The Artemis Board
The objective of this lab is to get familiar with Arduino development and the SparkFun RedBoard Artemis Nano board. It primarily consists of setting up the Arduino IDE on PC and testing several basic peripherals on the Artemis board: LED, serial reading and writing, onboard temperature sensor, and microphone.
For environment setup, Arduino IDE 2.0.3 is installed on my computer (macOS 13.0) to write and upload code to the Artemis board. After that, the SparkFun Apollo3 board package is installed in the IDE via the board manager. The board is hooked up to the computer with a USB-C cable for burning and testing.
The first task is to run the built-in example Blink, a simple program that blinks the Artemis's LED on and off at 1-second intervals (a 2-second cycle). I shortened the cycle to 1 second by changing the delay() parameters from 1000 ms to 500 ms.
Task 1: Blink it Up Demonstration
The second task is to run an Artemis example sketch that operates serial communication between the board and PC.
The MCU contains two UART peripherals for serial communication. In this sketch, the baud rate is set to 115200. Upon running the program, the MCU counts from 0 to 9, then enters a loop where it listens for and echoes back received messages. In the demo video below, I sent "test message" to the MCU, and it echoed the characters back:
Task 2: Serial Communication
The third task is to run another Artemis example, analogRead, to test the temperature sensor of the MCU.
Upon running the example program, the MCU continuously reads data from the temperature sensor and prints it to the serial monitor. In the demo video, I first let it read at room temperature, where the reading was around 33700 counts; then I put my hand on the chip, and the reading increased to around 34200 counts. It took about 7 seconds for the heat to transfer.
Task 3: Analog Read
In this task, an example sketch is run to test the onboard pulse density microphone (PDM). The sensed frequency with the largest magnitude is printed to the serial monitor.
In the demo video below, I first played the octave C1 to B1; the printed loudest frequencies ranged roughly from 500 to 1400 Hz. Then I played the much higher-pitched octave C7 to B7, and the resulting range was 2000-4000 Hz, indicating higher frequencies.
Task 4: Microphone Output
The additional task asks to build a music "A" note detector.
To realize this, I modified the example sketch from the last task by adding an if block that turns on the LED when the loudest frequency hits 881 Hz (a value I obtained through several tests; note A is nominally 880 Hz) and setting the LED low at the start of every loop. A delay call is used to make the result more visible.
Additional Task: Note A Detector
This lab mainly deals with Arduino IDE setup and tests several onboard peripherals by running and modifying example sketches on the Artemis board.
Bluetooth
This lab aims to establish Bluetooth connection and communicate between the Artemis board and PC.
I already had Python 3.10, pip, and venv on my PC, so all I did to get the environment ready was run the following commands in the CLI and then install the required packages in the virtual environment.
On the Artemis board side, I burned the sketch ble_arduino.ino provided in the codebase so that the MCU prints its MAC address via serial communication.
The codebase used in this lab to establish the BLE channel consists of two parts: Python code that runs on the PC and Arduino code that processes commands on the MCU. It is structured as follows:
On the PC end, the configuration file connection.yaml sets the MAC address of the other side and the Universally Unique Identifiers (UUIDs) used in communication. The cmd_type.py file uses an enumeration class to map command types to their associated values. ble.py and base_ble.py define the fundamental functions that support BLE. The Jupyter notebook demo.ipynb runs the key commands.
On the MCU side, besides the sketch ble_arduino.ino that runs on the board, there are three header files that define the BLE characteristics, the Extended String type, and functions for handling robot command strings.
The BLE connection between PC and MCU works in a pattern like a community bulletin board, where the peripheral device posts data for all radios in the community to read and the central device acts like a reader to view the information on the bulletin board and get the data it needs.
Before kicking off lab tasks, I first ran the demo codes and obtained the following outputs:
The corresponding output from the MCU:
The first task is to complete the ECHO case on the MCU side, which lets it reply to the PC with an augmented message. I programmed the MCU to repeat the received message three times and append a smiley face:
Task 1: Send Echo Command
The second task is to complete a command GET_TIME_MILLIS, with which the MCU sends the current time in the format "T:123456".
I added GET_TIME_MILLIS to the command enumerations of both the PC and Artemis programs and implemented the command as:
Task 2: Get Time Command
This task sets up a notification handler on the PC to extract the time from the string sent by the Artemis.
In the handler, the received byte array is first decoded to a string and then sliced, so only the time value is printed:
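A minimal sketch of such a handler (the (uuid, byte_array) callback signature follows the course's BLE wrapper; the names are mine):

    def time_notification_handler(uuid, byte_array):
        # Decode the raw bytes, e.g. b"T:123456", and strip the "T:" prefix
        msg = byte_array.decode("utf-8")
        if msg.startswith("T:"):
            print(msg[2:])  # only the millisecond value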
Task 3: Notification Handler
In this task, another command GET_TEMP_5s is added, which samples the temperature five times at 1-second intervals and sends the obtained values all at once:
A new notification handler print_recv is set up for this task to display the results on the PC:
Task 4: Get Temperature Command
To sample temperature more rapidly, I added another command GET_TEMP_5s_RAPID with an interval of 100 ms, so 5 * 1000 / 100 = 50 samples are obtained in 5 seconds, stored in two arrays, and then sent one at a time to the PC:
Task 5: Get Temperature Rapidly
The Artemis board has 384 kB of RAM. Assuming all of it can be used for the sampled data, 384*1000*8/16 = 192,000 16-bit values can be stored. If the data is sampled at 150 Hz, 192,000/150 = 1,280 seconds of data can be recorded; in other words, 1,280/5 = 256 runs of "5 seconds of 16-bit values taken at 150 Hz" can be afforded.
To create messages with payloads of different sizes, I added a command DATARATE that makes the MCU send a string of the length specified by the PC. For example, if the PC runs ble.send_command(CMD.DATARATE, 20), the MCU replies with a string of 20 '.'s, which takes 20 bytes.
To measure data rate continuously, I set up a notification handler on the PC that, in every round, records the time a message is received (recv_time), commands the Artemis to send a reply one byte longer, records the sending time (send_time), and calculates the data rate as message length / (recv_time - send_time).
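A sketch of that PC-side measuring logic (names and the exact callback signature are illustrative, not the precise course API):

    import time

    send_time = None  # stamped right before each ble.send_command(...) call

    def datarate_handler(uuid, byte_array):
        # Throughput of one round trip: payload size over elapsed time
        recv_time = time.time()
        rate = len(byte_array) / (recv_time - send_time)
        print(f"{len(byte_array)} bytes -> {rate:.1f} bytes/s")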
By running the command, I obtained the data rate for messages of 5 and 120 bytes, as well as a plot of data rate versus message length from 0 to 150 bytes:
The plot implies that, in general, data rate increases as the payload size grows. It is therefore safe to conclude that larger replies reduce the share of overhead and increase the effective data rate, while sending many short packets introduces greater overhead.
To count the number of messages sent and received on both sides, I ran a command on the Artemis that continuously sends messages carrying their indices, and counted the messages received by the PC in a handler:
Additional Task: Reliability
As the MCU sends data continuously, the data rate is as high as possible. In the test, the computer caught every message sent from the Artemis without missing any data, so the transmission can be considered reliable.
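For reference, the counting on the PC side can be as simple as this sketch (names illustrative):

    received_ids = []

    def count_handler(uuid, byte_array):
        # Each message carries its index; a gap would reveal a dropped packet
        received_ids.append(int(byte_array.decode()))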
This lab establishes communication between the PC and the Artemis board over Bluetooth. Limitations, effective data rate, and reliability of such transmission are also discussed.
Time of Flight Sensors
This lab aims to arm the Artemis board with two time-of-flight distance sensors. The range, accuracy, repeatability, and reading time of the sensors are examined.
The sensors use I2C to communicate with the Artemis board. As specified in the VL53L1X datasheet, the default I2C address is 0x52.
Two ToF sensors are employed on the robot. Since they share the same default I2C address, one of the addresses must be changed manually. I implemented this by connecting the XSHUT pin of one sensor to a GPIO pin on the Artemis. During setup, after starting both sensors, the pin is driven low to temporarily shut down the corresponding sensor; a new I2C address is assigned to the other sensor programmatically, and the shut-down sensor is then re-enabled with its default address.
I plan to place the two sensors at the front corners of the robot car, as shown in the picture:
Each sensor detects obstacles on its corresponding side. If both detect an obstacle at the same distance simultaneously, there is probably an obstacle directly ahead. However, this requires both sensors to function correctly with high precision, and a small obstacle in front could still be missed if it sits closer to one sensor. More accurate measurement could be achieved by adding more sensors.
Wiring Diagram
Qwiic cables are arranged in colors: black for GND, red for 3.3V, blue for SDA and yellow for SCL.
The two ToF sensors are connected to a QWIIC break-out board then to the Artemis board.
Picture of ToF sensors connected to QWIIC
Screenshot of Artemis Scanning for I2C Device
After the sensors were physically connected, the Artemis scanned the I2C bus and found a device at address 0x29, which differs from the 0x52 claimed in the VL53L1X datasheet. The reason is that the datasheet quotes the 8-bit address including the read/write bit: 0x29 is 0101001 in binary, and shifting it left by one and padding a 0 (the write bit) at the end yields 01010010, which is 0x52.
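The relationship between the two addresses can be verified in a couple of lines (a quick Python check):

    # The 8-bit write address is the 7-bit address shifted left, with a 0 (write) bit appended
    assert (0x29 << 1) | 0 == 0x52
    assert 0x52 >> 1 == 0x29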
The ToF sensor has three modes. The default Long mode can measure up to 4 m away, achieving the maximum ranging distance, but is relatively sensitive to ambient light. The Short mode only ranges up to 1.3 m but has the best immunity to ambient light changes. The Medium mode sits between the two, detecting obstacles up to 3 m away while being only moderately sensitive to ambient light.
The final robot is expected to cope with different surroundings, so good ambient immunity is required. Also, a range of 1.3 meters is large enough for a small car. Thus, I think the Short mode should work well on it.
Experiment Setting
I checked the range, accuracy, repeatability, and ranging time of the ToF sensor. Samples were taken at 50 mm intervals from 50 mm to 900 mm. By adding some code to the ReadDistance example sketch, the MCU sent the ranging time for every measurement to the PC.
Sample Result
Measured Distance VS Truth
Ranging Time VS Distance
The results show that in Short mode, distances within the 1.3 m range can be measured with a maximum error of about ±8 mm; measurements beyond the range return zero. The measurement has good repeatability. Ranging time has little to do with the measured distance and is approximately 50.0 ms.
To use 2 ToF sensors simultaneously, one of them needs to be assigned a new I2C address programmatically while the other one is restarted via the XSHUT pin to maintain the default address:
In the demonstration video, the two sensors work in parallel and generate results separately:
To maximize reading speed, I removed the stopRanging() and clearInterrupt() calls and moved startRanging() into the setup code. Sensor data are printed only when a new reading is ready, i.e., when checkForDataReady() returns true.
Sample Result
Although there is some redundant code in the loop (getting time and serial printing), it consistently takes 10 to 11 ms, significantly less than the 50 ms ranging time.
Within the 10-11 ms, the biggest share of time is still taken by the ToF reading operations (~8 ms). Therefore, I suppose the limiting factor is the I2C communication between the sensors and the Artemis.
A GET_ToF_5s command is written to send 5 s of sampled ToF data to the PC through BLE:
The PC uses a notification handler to parse the received messages. The samples from the two sensors are then plotted against their time stamps:
The sensor used in this lab is based on infrared time-of-flight. Such sensors emit a pulse-modulated signal, measure the travel time t until the reflection returns, and compute the distance as d = t*c/2, where c is the speed of light. The pros of ToF IR sensors are a high sample rate, a small form factor, and immunity to changes in color, texture, and ambient light; their drawbacks are higher cost and more complicated processing.
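For a sense of scale, a back-of-the-envelope calculation (my own numbers, not from the datasheet) shows why the timing electronics must resolve nanoseconds:

$$d = \frac{c\,t}{2} \quad\Rightarrow\quad t = \frac{2d}{c} = \frac{2 \times 1\ \text{m}}{3\times 10^{8}\ \text{m/s}} \approx 6.7\ \text{ns}$$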
Some infrared distance sensors are amplitude-based: they compare the amplitude of the reflected wave with that of the emitted wave and calculate distance accordingly. Their pros include low cost and simple circuitry, and they work reasonably well under many circumstances. However, such sensors are extremely sensitive to target reflectivity and perform poorly in high ambient light.
Another type is the triangulation-based sensor. To obtain the distance to the target, it emits a wave at an angle and measures the reflection with an array of diodes, then computes the result from the angle and the position of the highest-magnitude diode. Its pros are simple circuitry and low sensitivity to color and texture; on the other hand, it is expensive and bulky, does not work well in high ambient light, and suffers from a low sample rate.
I tested the ToF sensor against a coarse white wall, white paper, a yellow book cover, and a piece of black leather. There was no notable difference among these materials in reading accuracy or speed, supporting the conclusion that the ToF IR sensor is insensitive to color and texture.
Experiment Setting
Two ToF sensors are connected to the Artemis board, and their performance is discussed under different circumstances. Sensor data are sent to the PC via BLE and parsed for future use.
IMU
This lab adds an IMU board to the Artemis and tests the motion of the robot car.
The IMU is connected to the Artemis through the QWIIC breakout board:
Artemis IMU Connection
After uploading the IMU example code to the Artemis, the readings from the accelerometer, gyroscope, and magnetometer on the IMU are printed:
In the example code, a constant AD0_VAL is defined for .begin(WIRE_PORT, AD0_VAL). It should be set to 1, the default value for the SparkFun 9DoF IMU breakout, but should be 0 if the ADR jumper is closed.
Accelerometers measure acceleration, which can be used to gauge orientation when stationary from the components of the gravity pull along the x, y, and z axes. Gyroscopes measure the angular velocity around the three axes, which can also be used to obtain orientation.
When the IMU is laid flat on the table, the z axis acceleration is around 1000 mg. All other readings from the accelerometer and the gyroscope are approximately zero, for there is no acceleration or movement except for gravity pull.
When I rotated the IMU to a 90-degree tilt, the 1000 mg reading transferred to the y-axis. The gyroscope reading on the y-axis rose, then returned to zero when the rotation finished.
I added code to blink the LED on the Artemis when the IMU is connected and started, as a visual indication:
Accelerometer readings can be turned into pitch and roll through some math: pitch is given by pitch_a = atan2(myICM.accX(), myICM.accZ()) * 180 / M_PI, and roll by roll_a = atan2(myICM.accY(), myICM.accZ()) * 180 / M_PI.
Example outputs for {-90, 0, 90} degrees pitch and roll are:
{-90, 0, 90} Degrees Pitch
{-90, 0, 90} Degrees Roll
Although noise affects the precision of accelerometer results a lot, causing an error of approximately ± 2 degrees, the accuracy is pretty good (readings are basically centered at each true value). Thus, I don't think a two-point calibration is needed in this case.
An FFT is used to analyze the noise in the accelerometer readings. I sampled roll data at a rate of 64 Hz for 5 seconds, yielding 320 points for the FFT. During the sampling period, I slowly turned the IMU around its x-axis.
With the sampled data from the Artemis, a Python script was run to generate the FFT result:
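The script is essentially a standard numpy FFT; a minimal sketch of what it does (the data file name is illustrative):

    import numpy as np
    import matplotlib.pyplot as plt

    fs = 64                              # sample rate (Hz)
    roll = np.loadtxt("roll_data.txt")   # 320 samples collected over 5 s
    spectrum = np.abs(np.fft.rfft(roll)) / len(roll)
    freqs = np.fft.rfftfreq(len(roll), d=1/fs)
    plt.plot(freqs, spectrum)
    plt.xlabel("Frequency (Hz)")
    plt.ylabel("Magnitude")
    plt.show()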
From the frequency-domain plot, the major component, which represents the useful data, lies in the low-frequency area near 0 Hz. There is no notable noise spike, probably because a low-pass filter is already enabled by default for the accelerometer inside the IMU chip. If I had to choose a cut-off frequency to eliminate the remaining noise, I would pick 5 Hz, which preserves the useful low-frequency data and filters out most of the noise.
Gyroscope data are collected to calculate pitch, roll and yaw:
Below are some example outputs at different gestures:
Initially, the IMU board is set to zero in all three orientations. The results from both the gyroscope and the accelerometer are accurate.
After a while, I changed only the yaw to 90 degrees, with the board still lying flat on the table. The yaw result was accurate, but the pitch and roll results from the gyroscope were far from the true values.
Then I turned the yaw to -90 degrees. Yaw was still measured accurately, but the pitch and roll given by the gyroscope drifted even farther from the truth.
To conclude, the gyroscope gives accurate yaw data but has a severe drift issue in pitch and roll.
As the accelerometer is susceptible to noise but not drift, while the gyroscope produces smooth but drifting results, the two can be combined with a complementary filter to cancel each other's drawbacks. The filter is given by theta = (theta + gyr_data * dt) * (1 - alpha) + acc_data * alpha:
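Expressed as code, one filter update looks like this (a direct sketch of the formula above, with pitch as the example axis):

    def complementary_filter(theta, gyr_rate, acc_angle, dt, alpha=0.3):
        # Integrate the gyro for smoothness, then pull toward the
        # accelerometer angle to cancel long-term drift
        return (theta + gyr_rate * dt) * (1 - alpha) + acc_angle * alpha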
After several tests, I chose alpha to be 0.3 to maximize performance. Here's the video demonstrating results from accelerometer only, gyroscope only, and combined, which is close to the accurate accelerometer result but without noise:
After commenting out unnecessary delays and Serial.print statements (keeping only the one that prints the calculated loop time), the maximum sampling speed is 1-2 ms per loop:
Loop Time in ms
To send IMU and ToF data to the PC, I added a new command GET_ToF_IMU_5s, which collects IMU and ToF data into arrays for 5 s and then sends them to the PC via Bluetooth in the format time|tof1,tof2|accx,accy,accz|gyrx,gyry,gyrz:
Received data are processed by a notification handler in Python and then plotted:
Each sample stores three 16-bit integers (2 bytes each) and six floats (4 bytes each), i.e., 3*2 + 6*4 = 30 bytes. At 10 samples per second, that is 300 bytes per second. With 384 kB of RAM, the Artemis can support 384*1024/300 ≈ 1310 s of such sampling if full memory is allocated to the arrays.
The 3.7 V 850 mAh battery is used to power the motors, as they consume most of the electricity to drive the car, while the Artemis board and its peripheral sensors use the 3.7 V 650 mAh one.
With the remote control, I managed to make the RC car perform stunts like flipping and spinning in place. To do a flip, the motors first dash in one direction and then abruptly reverse.
Without Artemis
Then I attached the Artemis to the car and had it do a spin. Motion data for 5 seconds were recorded and sent to the PC.
With Artemis
Corresponding collected data plots:
The ToF data are meaningless here, as the ToF sensors are not yet fixed facing a proper direction. The abrupt change in GyrZ at 3.5 s indicates the car starting to spin, and the acceleration oscillations in Y and Z correspond to bumps during the spin.
In this lab, an IMU is attached to the Artemis, and its accelerometer and gyroscope readings are used to calculate the orientation of the robot. Battery-powered operation of the Artemis and the robot car is also tested.
Motors and Open Loop Control
In this lab, two motor drivers are tested and then used to drive the motors of the car. Other components, such as sensors and batteries, are also secured. Untethered open-loop control is performed at the end.
The intended connections between the motor drivers, Artemis, and battery are shown as:
I chose pins 13, A14, A15, and A16 on the Artemis as the motor drivers' inputs because these pins support PWM, as indicated in the schematic by a ~ sign.
The Artemis and the motor drivers are powered by different batteries so that they can be debugged separately and the noise generated at the motor end is isolated from the Artemis. Also, the motors deplete power much faster than the Artemis, so they need to be powered separately from the MCU.
To display the PWM signal, the oscilloscope's probe tip is connected to one of a motor driver's outputs and the ground clip is attached to the driver's GND:
Oscilloscope Hookup
As for the driver's power supply, I simply attached the 3.7 V 850 mAh battery, whose voltage is within the operating range of 2.7 V to 10.8 V stated in the motor driver's datasheet.
I first tested the driver with analogWrite(13, 63); analogWrite(A14, 0);, which generates a 25% duty cycle PWM signal, as 63 is approximately 25% of 255. The corresponding oscilloscope display is:
Then I tested analogWrite(13, 127); analogWrite(A14, 0);, which generates a 50% duty cycle PWM signal, as 127 is approximately half of 255. The corresponding oscilloscope display is:
By running the following two codes respectively, I managed to spin the right-side wheels forward and backward.
With the battery attached to the motor drivers, I put the car on the floor and let the wheels on both sides spin so that the car could move forward:
All the components are secured (shout out to my lab partner Boxin in this task!!):
The lowest PWM values that drive the car forward and make it turn on the lab floor are found by burning different speed values into the Artemis and checking whether the car moves.
To make the car run, the lowest PWM value I obtained is 25, corresponding to a 9.80% duty cycle:
The lowest value for in-place turns is 80, corresponding to a 31.37% duty cycle:
These results are, however, susceptible to changes in the lab setting, e.g., different floor textures, tire smoothness, etc.
Initially, the car tended to drift toward its left side, so I added a calibration factor of 1.25 that gives the left wheels more power, allowing the car to follow a fairly straight line. Such static calibration is still susceptible to environmental changes, so PID control will be necessary in future labs.
With this calibration applied, the car runs a fairly straight line for over 6 feet in the video (each vinyl tile is 1 foot long):
I wrapped the basic movements into functions to minimize repetition in the program:
For the untethered open-loop control, the car is programmed to repeatedly move forward and backward and make a clockwise turn with the following code:
As measured in previous labs, a ToF sensor reading takes ~8 ms (corresponding to 125 Hz) and an IMU reading takes ~2 ms (corresponding to 500 Hz).
Using the code above, I found that analogWrite executes within 1.5 ms and thus won't be a bottleneck. With loads on, I measured the PWM signal at 50% duty cycle and got a frequency of ~180 Hz. This is fast enough to drive an inertial load like the motors, and adequate for the fastest ToF-based control, but lower than the maximum IMU reading frequency. Generating a faster PWM signal by manually configuring timers could therefore help follow the faster IMU-based control. But there is no free lunch: at higher PWM frequencies, the motor coils can't extract as much energy from the pulses and thus require a higher voltage to spin.
As static friction is greater than kinetic friction, it takes a higher voltage to start the car from rest than to keep it moving. To explore the lowest PWM value in motion, I set the initial speed to 26 to start the car, then deducted 1 from the value every 3 seconds to see which value stops the car.
As demonstrated in the video, the car stopped 6 seconds after it started to move, corresponding to a PWM value of 24. So the lower limit to keep the car moving is 24 (9.4% duty cycle), which is below the limit to start from rest (25).
This lab discusses PWM signals and explores the lower PWM bounds to start the car, turn it, and keep it moving. All the components are assembled and secured on the car, and a simple open-loop control is performed.
Closed-loop control (PID)
Lab 6 is about PID control; a position-based PID controller is implemented. A debugging system that logs experiment data is also constructed in this lab.
To log experiment data, the distance value, PID result, and motor input are stored along with time stamps during operation. These data are sent to the PC via Bluetooth only after the PID control completes, as Bluetooth transfer introduces significant delay and thus should not run inside the PID loop.
For data storage, I chose to initialize four fixed length arrays:
Each array takes 2*2000 = 4000 bytes, and together they occupy 16 kB, far below the 384 kB RAM of the Artemis. Also, one PID cycle takes ~7 ms, so the arrays can store data for around 7 ms * 2000 = 14 s, enough for the PID control to finish.
The program follows the BLE structure used in Lab 4, where the main loop waits for a BLE command. The PID control and the corresponding data logging are invoked by a PID_POSITION command. First, the kp, ki, and kd values are parsed from the Bluetooth command, followed by initialization of the ToF sensor and parameters. After that, the PID control runs with data logged. Finally, the stored data are sent over Bluetooth in the format time|tof|motor|pid.
On the PC end, the command is sent as ble.send_command(CMD.PID_POSITION, "kp|ki|kd") and the received data are parsed with a notification handler:
I chose to complete Task A: Position Control. The car is expected to drive toward a wall and stop exactly 1 ft away from it using feedback from the time-of-flight sensor.
PID stands for proportional-integral-derivative control, and its formula is given by correction = Kp*error + Ki*errorIntegral + Kd*errorDerivative.
Proportional gain Kp serves to correct current error (the difference between target and current value) in a proportional way. A higher Kp yields a stronger control action but can also incur instability or overshoot.
Integral gain Ki helps to reduce accumulated past error. It adds more steering action if the error persists for a long period of time, but can also cause overshoot or weaving if set too high, as the integrated error in memory takes time to dissipate.
Derivative gain Kd tries to correct future errors based on the rate of change of the error. It acts as a damping factor that reduces overshoot and limits the speed of the steering response at the risk of amplifying noise.
At first, I included a while (!distanceSensor1.checkForDataReady()) delay(1) in the PID loop, but each cycle then took 90-100 ms on average. To obtain faster loop times, I removed this wait for sensor readiness. As a result, the final PID loop achieves 8-9 ms per cycle, of which the biggest share comes from the ToF sensor reading (~7 ms).
The PID calculation is wrapped in a function that takes kp, ki, kd, target, and current values as input, computes the PID value, applies an LPF to the derivative term to avoid derivative kick, and applies wind-up protection to the integral term:
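A sketch of that logic (the LPF factor, wind-up bound, and the explicit dt argument below are illustrative, not my exact constants):

    ALPHA = 0.2      # low-pass factor for the derivative term
    I_LIMIT = 150.0  # wind-up clamp on the integral term
    integral, prev_error, d_filtered = 0.0, 0.0, 0.0

    def pid(kp, ki, kd, target, current, dt):
        global integral, prev_error, d_filtered
        error = target - current
        # Clamp the integral so saturation cannot wind it up indefinitely
        integral = max(-I_LIMIT, min(I_LIMIT, integral + error * dt))
        d_raw = (error - prev_error) / dt
        d_filtered = ALPHA * d_raw + (1 - ALPHA) * d_filtered  # tame derivative kick
        prev_error = error
        return kp * error + ki * integral + kd * d_filtered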
The PID control loop operates as in the following code and ends either when the current error is below 1 mm or when the logging arrays are full.
The PID value is mapped to a PWM range from 32 to 200. Starting from 32 skips the deadband in which the wheels do not move.
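The mapping itself can be as simple as the following sketch (the clipping behavior is my assumption):

    def pid_to_pwm(pid_value, pwm_min=32, pwm_max=200):
        # Map |PID| into the usable band; the sign encodes the direction
        pwm = pwm_min + min(abs(pid_value), pwm_max - pwm_min)
        return pwm if pid_value >= 0 else -pwm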
For tuning, I decided to follow heuristic procedure #2, so I started with kp=1.0, ki=0, kd=0. Oscillation is observed with this set of parameters; the corresponding ToF reading, PID value, and motor input vs. time plots are:
kp=1.0, ki=0, kd=0
Next, I decreased kp by a factor of 4 and added the integral term. The oscillation is mitigated but overshoot arises. I also observed that with this set of parameters, the car tends to crash into the wall due to the steering action from the integral term.
kp=0.25, ki=0.1, kd=0
kp=0.25, ki=0.2, kd=0
To regulate the steering action incurred by the integral term, I increased kd and tested different sets of values.
kp=0.25, ki=0.2, kd=0.8
kp=0.25, ki=0.1, kd=0.8
kp=0.25, ki=0.1, kd=1.0
kp=0.25, ki=0.05, kd=1.0
The last two sets work pretty well: convergence is achieved with only a few oscillations, and overshoot is kept small enough that the car does not bump into the wall.
Below are three successful runs with kp=0.25, ki=0.1, kd=1.0 under different conditions:
The maximum linear speed achieved is around 0.8 m/s.
Windup is an issue with the integral term that gives rise to overshoot, because the accumulated integral component pushes the control signal beyond its saturation limits. To address this problem, I employ two techniques: bounding the integral term and using a Clegg integrator.
The Clegg integrator resets the integral value every time the error crosses zero, which eliminates error accumulated in the opposite direction and limits windup.
Without & With Clegg Integrator
This lab implements position PID control tuned through a heuristic process. Besides tuning the PID terms, solutions for eliminating derivative kick and integral windup are also discussed.
Kalman Filter
The objective of Lab 7 is to implement a Kalman filter, aimed at speeding up the control loop built around the PID controller from Lab 6.
Before implementing the KF, the drag and momentum terms are estimated from a step response in order to calculate the A and B matrices. I chose a step size of 88, the maximum PWM value generated in Lab 6's PID control, ran the robot accordingly, and obtained the following data:
As the plots imply, the robot reaches steady state at around 4.3 s, where the steady-state speed is approximately 1907 mm/s, and the 90% rise time is about 1.8 s. These give x_dot = 1907 and t_90 = 1.8. Drag and momentum are calculated as follows:
The following picture from the lecture slides explains how the Kalman filter works. Several parameters need to be specified in this workflow: the matrices A, B, and C of the state-space equation, and the noise matrices sigma_u and sigma_z.
Kalman Filter Workflow from Lecture Slides
To begin with, A and B are calculated and then discretized using the sampling interval delta_T, which I set to 0.1 s, considering that the ToF sensor takes around 100 ms waiting for data to be ready.
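For reference, the model and its discretization take the standard form from lecture, with d and m taken from the step response above:

$$A = \begin{bmatrix} 0 & 1 \\ 0 & -d/m \end{bmatrix},\qquad B = \begin{bmatrix} 0 \\ 1/m \end{bmatrix},\qquad A_d = I + \Delta T\,A,\qquad B_d = \Delta T\,B$$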
The C matrix is [-1, 0], as the ToF sensor reads the negative of state 0, the distance from the wall. The state vector x is initialized to the negative of the initial ToF reading.
Then the noise covariance matrices are specified. The process noise sigma_u depends on the sampling rate. Its two standard deviations, sigma1 and sigma2, reflect the trust in the modeled position and speed respectively, and a diagonal matrix is used because we assume they are uncorrelated.
For sigma_z, which represents the measurement noise, I observed that when the robot is stationary the measurement error is about ±5 mm. Considering that movement of the robot adds further instability, I took four times that value, i.e., sigma3 = 20.
At this point, I only gave rough ballpark estimates for the variances and planned to fine-tune them by trial and error when implementing the KF, as their accuracy visibly determines the result of the filter: values that are too large make the KF unresponsive, while values that are too small yield poor performance.
I first implemented the Kalman filter in a Jupyter notebook with a set of time-stamped ToF and PWM data recorded in Lab 6.
The KF calculation iterations are wrapped up in a function:
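The function follows the standard predict-update form from the lecture slides; a minimal numpy sketch of what it computes (argument names are mine):

    import numpy as np

    def kf(mu, sigma, u, y, Ad, Bd, C, sig_u, sig_z):
        # Prediction from the discretized state-space model
        mu_p = Ad @ mu + Bd * u                  # u: scaled PWM input (scalar)
        sigma_p = Ad @ sigma @ Ad.T + sig_u
        # Update: compute the Kalman gain, then fold in the measurement y
        K = sigma_p @ C.T @ np.linalg.inv(C @ sigma_p @ C.T + sig_z)
        mu = mu_p + K @ (y - C @ mu_p)           # y: the ToF measurement
        sigma = (np.eye(2) - K @ C) @ sigma_p
        return mu, sigma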
Initial state mean and covariance are specified as (Again, the values of the state covariance matrix are only an initial estimation and will be tuned through experiments):
Then the PWM values are scaled into u by dividing by the step size 88:
The KF calculation is iterated in the following loop, with results recorded:
The initial trial had terrible performance, with the KF estimate far off the original data:
Initial sig=[[5**2,0],[0,5**2]], sigma_u=[[1000,0],[0,1000]], sigma_z=[400]
As the initial state is well below the ground truth and rather off the trend of the curve, I tuned the initial state covariance to a higher value, and the result improved in turn:
Initial sig=[[30**2,0],[0,30**2]], sigma_u=[[1000,0],[0,1000]], sigma_z=[400]
The KF response was weak at some points, so I increased the process noise variances:
Initial sig=[[30**2,0],[0,30**2]], sigma_u=[[4000,0],[0,4000]], sigma_z=[400]
The KF curve was rather close to the ground truth but still slightly off. I lowered the measurement noise variance and obtained a good result at sigma3 = 5:
Initial sig=[[30**2,0],[0,30**2]], sigma_u=[[4000,0],[0,4000]], sigma_z=[25]
Implementing the KF on the Artemis is just a matter of transplanting the KF code from Jupyter to the Artemis (translated to C++, of course) and incorporating the KF iteration into the PID loop:
In the PID loop, the error is updated from the KF distance estimate instead of the raw ToF reading:
Running the updated code with the Kalman filter, I obtained the following results:
The KF curve fits the raw data quite well; only the initial points are rather off. Also, as the first plot implies, the PID value is calculated from the KF estimate.
As demonstrated in the video, though, there is no significant improvement over the PID-only counterpart. This is because I haven't actually sped up the control loop yet: the wait-for-ToF-data-ready delay hasn't been removed.
This lab builds a Kalman filter whose parameters are estimated from the data logged in the previous lab and then fine-tuned by trial and error. The filter is first implemented in a Jupyter notebook for a sanity check and finally integrated into the PID control on the robot.
Stunts!
This lab aims to perform a controlled stunt. The one I chose is based on position control: the robot is expected to perform a flip on a mat whose center is located 0.5 m from the wall, then drive back as fast as possible.
Though the lab builds on the position control from Lab 6, the PID control is removed to ensure that the robot runs toward the wall at full speed and then backward at full speed, which is necessary for the flip. Without PID, I had to trigger the stop at a distance longer than 500 mm to leave room for drifting.
To test this distance more easily, the stunt instruction is sent as a Bluetooth command with the distance value specified: ble.send_command(CMD.FLIP, "1200"). (1200 is the final value used in the successful demo.)
I also tried to employ the Kalman filter for faster control, with parameters taken from Lab 7 but u scaled by 255, since the robot moves at full speed. However, the performance was not much better than simply setting a longer trigger distance, so I removed the KF from the final implementation.
Kalman Filter Results
In the corresponding command handler on the Artemis, the robot is programmed to do the flip upon reaching the specified distance by immediately driving backward at full speed.
Initialization
Control Loop
Below are three successful runs:
First Success (you can tell how excited I was by the shaking camera ;))
Second Success
As the second run shows, the wheels on the left side sometimes responded faster at the flipping point than those on the right, even with the calibration factor applied, which could cause the flip to fail. To compensate for the difference, I applied the following code before making the wheels spin backward:
Third Success
The last run took 2.8 s for the car to come back.
Below are the corresponding PWM and ToF reading graphs. The ToF data are only valid before reaching 500 mm, as the robot started to flip after that point and the readings were no longer meaningful.
Mapping
This lab generates a map of the obstacles and walls in a static room by spinning the robot on its axis while taking ToF sensor readings.
To guarantee reliable distance readings, the robot should turn at a low speed. PID control of the angular speed is used to keep the turning speed constant; only the proportional term was included, which turned out to work well. The control is integrated with the IMU and ToF reading in the turning loop:
For debugging simplicity, parameters such as Kp, the target turning angle and speed, and the benchmark PWM value are set with the Bluetooth command ble.send_command(CMD.TURN, "kp|targetAngle|benchmark|targetSpeed"). Below is a successful control with kp = 1, targetTurningAngle = 360 degrees, benchmarkPWM = 90, and targetAngularSpeed = 25 degrees/s, where kp is tuned experimentally.
As each control loop takes ~100 ms, approximately 360/25/0.1 = 144 readings can be collected per run.
As the video shows, the robot can drift slightly while turning, causing a maximum error of ~50 mm. Considering the randomness of the drift, I take the average of 3 measurements at each point to minimize this error. Another error comes from the fact that the ToF sensor is not placed exactly at the pivot point of the turn, which introduces an error of ~30 mm; this can be compensated by subtracting the value from the collected distances.
At point (-3,-2), 3 runs of ToF scans are taken. The corresponding polar plots are:
Averaging the three, the final result is:
Polar Scan at (-3,-2)
The same steps are applied at the other 3 measurement points:
Polar Scan at (0,3)
Polar Scan at (5,-3)
Polar Scan at (5,3)
To convert the measurements into the inertial reference frame, Cartesian coordinates are calculated from the angle and the ToF reading as cos(angle + offset) * distance in x and sin(angle + offset) * distance in y, where distance is converted into tile units via TOF reading / 333, and offset is determined by the starting orientation of the robot.
Converting the above calculation to a matrix:
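Equivalently, in vectorized form (a sketch; the 333 mm tile size is from above, and the measurement-point coordinates px, py are added in tile units):

    import numpy as np

    def to_inertial(angles_deg, dists_mm, offset_deg, px, py):
        # Rotate each ToF reading into the room frame and convert mm to tiles
        theta = np.radians(np.asarray(angles_deg) + offset_deg)
        r = np.asarray(dists_mm) / 333.0
        return px + r * np.cos(theta), py + r * np.sin(theta)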
Below is the converted plot:
A line-based map is estimated from the scatter plot to later be used in the simulator:
Localization (Sim)
This lab implements a grid localization using Bayes Filter in a simulated environment.
Bayes Filter involves two main steps in each iteration: a prediction step to account for control input data (movement) and an update step to account for observation data (measurement). During the prediction step, uncertainty in the belief is typically increased, while the update step helps to decrease uncertainty.
Lecture Notes
I referred to Linda Li's code for the Python implementation of the algorithm.
compute_control is a function that generates the control u (i.e., delta_rot1, delta_trans, and delta_rot2) from the current and previous positions. The calculations are done as the following equations:
Translating them into code:
The odom_motion_model function calculates the probability that the robot moved from a previous position prev_pose to a current position cur_pose given a specific control input u. The compute_control function is first run to determine the control action needed to move from prev_pose to cur_pose. Then Gaussian noise models are applied to estimate the probability that the robot completed this transition under the given control action; the probability is the product of the Gaussian function evaluated for all three parameters:
The prediction_step function updates the probability bel_bar by calculating the actual control input u from the robot's current and previous odometry poses, cur_odom and prev_odom. If the belief value of a previous grid cell is less than 0.0001, it is considered negligible and disregarded; otherwise, the code iterates through all current cells and updates bel_bar:
Since this belief represents a probability distribution, it is necessary to normalize loc.bel at the end to ensure that all values add up to 1.
The sensor_model function receives a pre-calculated array of the true observations for a specific pose in the grid. It evaluates the probability of each sensor measurement recorded by the robot after its latest movement, using the Gaussian function with the true observation as the mean and sensor_sigma, the sensor model's noise parameter, as the standard deviation. An array containing the results for all 18 measurements is returned. Essentially, this function determines the probability that the sensor observations match the true observations for a given pose in the grid:
For the update step, the potential current poses of the robot are iterated through with three nested for loops. The intermediate belief (loc.bel_bar, updated during the prediction step) for each pose is multiplied by the probability that the current sensor readings would be observed at that pose. The belief value is updated for each pose accordingly, and normalization is performed again as in the prediction step:
Below is the Bayes filter simulation of a robot moving along a preplanned trajectory. The odometry values (red), the ground truth of the robot's position (green), and the belief (blue) are plotted along with the values of loc.bel_bar at each grid cell, where a lighter shade indicates a higher belief value:
The prediction step has a high degree of accuracy, as the cells with high probability are close (within 1-2 grid cells) to the robot's actual location.
Below is the logged output for each step:
Localization (Real)
In this lab, the Bayes filter localization simulated in Lab 10 is performed on the actual robot.
The provided notebook lab11_sim.ipynb is first run to test the localization functions in simulation. Here is the resulting plot with odometry, ground truth, and belief:
As the plot suggests, the belief (blue) and the ground truth (green) are basically close to each other.
The perform_observation_loop function sends a BLE command that makes the robot turn in place and collect 18 equidistant (in angular space) ToF readings. The angular data and the ToF readings are used as the sensor bearings and sensor ranges in the Localization module, respectively.
To wait for the robot to collect data, asyncio is used to generate a 40 s delay. I tried an async function definition with the await keyword, but errors occurred, probably due to missing components for asyncio in the environment. Thus, I just call the asyncio sleep coroutine directly:
On the Artemis end, I reused the Lab 9 code for the OBSERVATION command with minor changes: removing the PID control and stopping the robot at every 20-degree increment to collect data:
Since such open-loop continuous turning may cause error to accumulate over time, I tested the program on the robot, making it turn at a low speed with a PWM value of 90.
The robot turned 340 degrees nearly in place to collect the 18 ToF samples. The corresponding angular data are [0.0, 20.17, 40.249, 60.275, 80.31, 100.431, 120.582, 140.609, 160.654, 180.693, 200.801, 220.824, 240.857, 260.896, 280.908, 301.1, 321.383, 341.428], where the final accumulated error is 1.428 degrees. I consider this error acceptable, so I adopted the open-loop control.
Then I placed the robot at the four marked poses in the lab and ran the update step of the Bayes filter once at each of them. To demonstrate the results, I present, for each point, the polar plot of the collected data, the update-step logs including the belief and its probability, and the map plot (belief in blue, ground truth in green). The logs and plots are all in meters.
From the results above, we can see that the position localization worked pretty well: the belief and ground truth perfectly overlap at three of the four points, and the remaining belief is off by only one grid cell. It is also noticeable that the orientation belief carries an error of about ±10 degrees. Meanwhile, the probabilities of all the beliefs are extremely close to 1.0.
One possible reason that the localization at (5 ft, 3 ft, 0 deg) was less successful than the others is that the surroundings at that location are quite symmetrical, as the polar plot implies. Such symmetry can confuse the estimate, since two locations share similar ToF readings. The orientation error was probably caused by drift while turning and by noise in the ToF data collection.
In summary, the lab results can be considered successful, as the estimates are basically close to the actual pose of the robot with high probabilities in the beliefs.
Path Planning and Execution
The final lab is to have the robot navigate through a sequence of waypoints in the same map as in Labs 9-11.
I collaborated with Zhiyuan Zhang on this lab. We figured out the solution scheme together; I developed the orientation and PID code on the Artemis, and the complete program was run on Zhiyuan's robot.
To ensure that the robot stops exactly at the correct position and pose, we decided to use PID control for both straight-line movements and in-place rotations. The position control depends on the change in ToF values, and the orientation control is based on the gyroscope readings.
For debugging convenience, the PID parameters and target are passed to the robot via bluetooth. Below are two PID demos:
CMD.PID_POSITION, "1.0|0|0.1|390"
CMD.PID_ORIENTATION, "1.0|0|0.1|180.0"
As the PID position control depends on the distance data from the ToF sensor at the front of the robot, we decided to have the robot always travel perpendicular to the walls. Therefore, two extra waypoints (yellow) are added:
Modified Route
In order to determine the PID target values, I placed the robot at each of the waypoints to obtain distance measurements. I found that the sensor tends to be more accurate at smaller distances and becomes less stable when the distance exceeds 2000 mm. Therefore, we made the robot move backwards, facing the closer wall, when traveling from (-4, -1) to (1, -1), which improved the accuracy of the distance reading.
The graph below demonstrates the designed robot movements, where the yellow points are the added waypoints, the orange lines show how the robot's position and orientation evolve along the way, and the blue symbols represent the ToF sensor's direction when each in-place turn finishes and each move starts:
Robot Movements
During our first trials, it was always difficult for the robot to reach the final (0, 0): tiny errors while moving toward or turning at the previous point (0, 3) would leave the robot not facing the obstacle (highlighted in red) perpendicularly, so it would read distances from the surrounding walls instead. Therefore, we changed the route to make the robot turn clockwise at (0, 3), read the wall behind it, and travel backward from (0, 3) to (0, 0). This significantly increased fault tolerance:
Final Plan
As shown in the PID section, the position and orientation PID controls are wrapped into two separate BLE commands with customizable target values. The computer takes charge of all command sending by translating the designed movements into code:
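The PC-side script is essentially a list of alternating turn and move commands executed in order; a sketch with hypothetical target values:

    import time

    # Hypothetical sequence of (command, "kp|ki|kd|target") pairs
    moves = [
        (CMD.PID_ORIENTATION, "1.0|0|0.1|-45.0"),  # face the next waypoint
        (CMD.PID_POSITION,    "1.0|0|0.1|390"),    # drive until 390 mm from the wall
        (CMD.PID_ORIENTATION, "1.0|0|0.1|90.0"),
    ]
    for cmd, params in moves:
        ble.send_command(cmd, params)
        time.sleep(10)  # let the maneuver finish before the next command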
After several trials, we achieved a successful run where almost every waypoint was hit and the robot stopped close to (0, 0):
Although the robot was slightly off at (2, -3) (it still hit the point after rotation), the error did not accumulate to the next point, as the algorithm is based on the absolute distance to the wall. It was also slightly off at the last two points, but all the errors were within one grid cell. These errors may be caused by inaccuracies in the ToF readings and by rotations that were not exactly in place (the motors are quite dependent on the battery level).
We didn't use localization to validate our solution, as the localization program developed in Lab 11 didn't include PID control to guarantee an accurate stopping pose. However, the video clearly demonstrates the performance of our result.
The final lab of this course concludes with a satisfactory outcome: the robot successfully navigated through all the waypoints! I am so grateful to the exceptional staff for your dedication and support throughout the entire semester. Thank you! :)