
Automation of the Picky Eater

Updated: Sep 28, 2024



By Lucy C.


“I have watched enough science fiction films to accept the fact that humanity’s unchecked pursuit of learning will end with robots taking over the world.”


—Sarah Vowell


While we cannot know for certain whether robots will take over the world, we can all agree that advances in automation have allowed robots to do countless things without human helpers. Odometry, sensors, and object detection software have given robots the autonomy to act independently. Even high school robotics teams like CAOS Robotics have been able to use these tools to improve the automation of our robots. Most notably, we took advantage of this indispensable technology when designing our robot for FIRST’s 2022-2023 Tech Challenge: the Picky Eater.


About the Picky Eater


Designed for the 2022-2023 season, the Picky Eater was tasked with intaking cones and placing them down plastic poles called junctions; the challenge essentially resembled a crane game. While the process of intaking and depositing cones was primarily teleoperated, our robot needed some degree of automation to make this task easier, as well as to give our team a competitive edge during the autonomous period: the part of the competition where our robot had to move and park in certain areas of the field on its own.


Below, we detail the innovative ways CAOS Robotics implemented motion planning and object detection tools in the Picky Eater’s automation.


1. Odometry


Odometry is a technique in robotics where motion data is collected to help a robot measure its distance traveled, location, and orientation. This is useful because it makes a robot aware of its surroundings, letting it base decisions on where it is and how far it has traveled.


We found odometry especially important to enhancing our robot’s autonomous performance, having had past experiences where our robot remained dormant during our autonomous period.


Installed at the base of our robot, our odometry system consisted of three tracking wheels that supplied the data our robot used to compute its distance and location. The first wheel rolled freely along the floor, providing raw motion data for the odometry. The second wheel was fitted with an encoder whose readings our code used to calculate the distance the robot had traveled. The third wheel provided auxiliary support to the system, increasing its stability and precision.


In order to help our odometry system calculate distance, we had to do a little arithmetic to approximate the number of inches the robot would travel for every revolution of the odometry wheels. Since the wheels had a radius of one inch, we simply had to find the circumference to know what one full revolution was equivalent to:


C = 2πr = 2π(1 inch)

C ≈ 6.3 inches per revolution


We incorporated this calculation into our code, so our odometry could add up the distance.
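For illustration, that conversion can be written as a small helper like the sketch below. The ticks-per-revolution constant is a placeholder; the real value depends on the specific encoder used.

```java
public final class OdometryMath {
    // Placeholder encoder resolution; the actual value depends on the encoder model.
    private static final double TICKS_PER_REV = 8192.0;
    private static final double WHEEL_RADIUS_IN = 1.0; // one-inch odometry wheel
    private static final double WHEEL_CIRCUMFERENCE_IN = 2.0 * Math.PI * WHEEL_RADIUS_IN; // about 6.3 in

    // Convert raw encoder ticks into inches traveled by the odometry wheel.
    public static double ticksToInches(int ticks) {
        return (ticks / TICKS_PER_REV) * WHEEL_CIRCUMFERENCE_IN;
    }
}
```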


To program our odometry system, our programmers used the motion-planning library Road Runner. Designed specifically for FTC, Road Runner is a library dedicated to helping FIRST participants perfect their robots’ autonomous performance, which made it an ideal fit for our odometry wheels.
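As a rough sketch of what driving with Road Runner looks like, the example below follows the structure of the standard Road Runner quickstart; SampleMecanumDrive, the op mode name, and the specific path come from that quickstart’s conventions rather than from our own code.

```java
import com.acmerobotics.roadrunner.geometry.Pose2d;
import com.acmerobotics.roadrunner.trajectory.Trajectory;
import com.qualcomm.robotcore.eventloop.opmode.Autonomous;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
// SampleMecanumDrive is provided by the Road Runner quickstart project;
// it reads the odometry wheels to track the robot's pose on the field.
import org.firstinspires.ftc.teamcode.drive.SampleMecanumDrive;

@Autonomous(name = "OdometryAutoSketch")
public class OdometryAutoSketch extends LinearOpMode {
    @Override
    public void runOpMode() {
        SampleMecanumDrive drive = new SampleMecanumDrive(hardwareMap);

        // Drive forward 24 inches, then strafe left 12 inches, using pose feedback
        // from the odometry wheels rather than elapsed time.
        Trajectory toJunction = drive.trajectoryBuilder(new Pose2d())
                .forward(24)
                .strafeLeft(12)
                .build();

        waitForStart();
        if (!isStopRequested()) {
            drive.followTrajectory(toJunction);
        }
    }
}
```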


Switching to odometry with Road Runner was a marked improvement over what we had previously been doing for our autonomous period. Before the switch, we used a time-based autonomous program: we assumed the robot traveled at a constant rate, so elapsed time would stand in for distance and location. In practice, this was imprecise, because the robot did not actually move at a consistent rate. The motion data gathered by the odometry wheels was far more accurate.


2. Sensors


In addition to utilizing odometry to enhance the Picky Eater’s automation, we also installed color and distance sensors to optimize the way our robot detected and deposited cones.


We first installed a color sensor on the front of our robot to detect cones based on their color. We had to tune the sensor’s RGB thresholds so the robot detected the color of the cones rather than the color of other surrounding objects. This data gave us insight into the robot’s object detection and helped us know when a cone was directly in front of the robot.
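A minimal sketch of this kind of color check, assuming the FTC SDK’s ColorSensor class, might look like the following; the hardware-map name and RGB thresholds are illustrative placeholders, not our tuned values.

```java
import com.qualcomm.robotcore.hardware.ColorSensor;
import com.qualcomm.robotcore.hardware.HardwareMap;

public class ConeColorDetector {
    private final ColorSensor colorSensor;

    public ConeColorDetector(HardwareMap hardwareMap) {
        // "cone_color" is a placeholder hardware-map name.
        colorSensor = hardwareMap.get(ColorSensor.class, "cone_color");
    }

    // Returns true when the RGB readings look like a red cone.
    // The thresholds are illustrative and would need tuning on the real sensor.
    public boolean seesRedCone() {
        int r = colorSensor.red();
        int g = colorSensor.green();
        int b = colorSensor.blue();
        return r > 200 && r > 2 * g && r > 2 * b;
    }
}
```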


We also had a distance sensor on top of the robot, which, as the name implies, measured the distance between our intake and the cone. The information this sensor provided was extremely useful, as it helped us determine from afar whether we could intake a cone.


Finally, we added a third sensor, another distance sensor, to the side of the robot. While the first distance sensor was good at telling us whether we could intake a cone, we noticed that on its own it sometimes detected junctions as cones. We added this third sensor to help the robot differentiate junctions from cones.
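The sketch below shows one way the front and side distance readings could be combined to decide whether a cone, and not a junction, is in front of the intake; the hardware-map names and inch thresholds are placeholders for illustration.

```java
import com.qualcomm.robotcore.hardware.DistanceSensor;
import com.qualcomm.robotcore.hardware.HardwareMap;
import org.firstinspires.ftc.robotcore.external.navigation.DistanceUnit;

public class IntakeRangeCheck {
    private final DistanceSensor frontSensor; // above the intake, aimed at the cone
    private final DistanceSensor sideSensor;  // on the side, used to rule out junctions

    public IntakeRangeCheck(HardwareMap hardwareMap) {
        // Hardware-map names and thresholds below are placeholders for illustration.
        frontSensor = hardwareMap.get(DistanceSensor.class, "front_distance");
        sideSensor = hardwareMap.get(DistanceSensor.class, "side_distance");
    }

    // A cone is considered ready to intake when the front reading is close enough
    // and the side reading does not suggest a junction pole next to the robot.
    public boolean canIntakeCone() {
        double frontIn = frontSensor.getDistance(DistanceUnit.INCH);
        double sideIn = sideSensor.getDistance(DistanceUnit.INCH);
        boolean coneInRange = frontIn < 4.0;
        boolean junctionNearby = sideIn < 6.0;
        return coneInRange && !junctionNearby;
    }
}
```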


3. Object Detection


Finally, our team experimented with a variety of object detection tools to help our robot detect the signal sleeve: a cone that signals where the robot should park. The “signals” were represented by images pasted on the cone, and the robot had to detect them in order to receive the “signal.” This is why choosing a workable object detection algorithm was so important.


We initially used images of a lightbulb, an atom, and a battery for the signal sleeves (one per sleeve), as they were thematically connected to energy. We attempted to use Google’s Teachable Machine, hoping that it would easily produce an object detection model that could distinguish these objects. However, we noticed that the tool produced an image classification model, not an object detection one, which was not compatible with our purposes. While categorizing an image can be useful in some aspects of automation, we knew it would be more helpful if our robot could unambiguously identify images when it encountered a signal sleeve.


Unfortunately, we struggled with FTC’s Machine Learning Tool as well. A variety of factors, such as poor data quality and insufficient training data, left our robot unable to detect the energy-themed images attached to the sleeves.


We then switched to a more compatible object detection software: TensorFlow.


Because TensorFlow’s machine learning approach involves scanning an image’s surface to detect objects, we switched to everyday objects with more consistent patterns to make the most of TensorFlow’s detection. The images we used for the signal sleeves were a giraffe, a pair of scissors, and a fork.
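For reference, a detection loop using the FTC SDK’s TensorFlow interface might look like the sketch below; it assumes a TFObjectDetector already initialized with a custom model (following the SDK’s ConceptTensorFlowObjectDetection sample), and the label strings and confidence cutoff are placeholders.

```java
import java.util.List;
import org.firstinspires.ftc.robotcore.external.tfod.Recognition;
import org.firstinspires.ftc.robotcore.external.tfod.TFObjectDetector;

public class SleeveClassifier {
    // Label strings are placeholders; they depend on how the custom model was trained.
    private static final String[] SLEEVE_LABELS = {"giraffe", "scissors", "fork"};

    // Assumes tfod was already initialized with Vuforia and the custom model,
    // following the SDK's ConceptTensorFlowObjectDetection sample.
    public int readSignal(TFObjectDetector tfod) {
        List<Recognition> recognitions = tfod.getUpdatedRecognitions();
        if (recognitions == null) return -1;          // no new frame processed yet
        for (Recognition r : recognitions) {
            for (int zone = 0; zone < SLEEVE_LABELS.length; zone++) {
                if (SLEEVE_LABELS[zone].equals(r.getLabel()) && r.getConfidence() > 0.7f) {
                    return zone + 1;                  // parking zone 1, 2, or 3
                }
            }
        }
        return -1;                                    // nothing detected confidently
    }
}
```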


Compared to the progress we made with the aforementioned tools, TensorFlow was relatively decent. The Picky Eater did quite well at detecting an image when a signal sleeve was in close proximity. However, the TensorFlow model also inadvertently detected surrounding objects that were not on the signal sleeve, which made it unreliable at longer distances.


Ultimately, this led us to AprilTag detection. We pasted a different AprilTag onto each of the signal sleeves and installed a USB camera on top of the robot so that it could scan these tags. Once a tag was scanned, the robot received information about the tag’s position and orientation relative to the camera, and it “knew” where to park.
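A minimal sketch of the parking decision, assuming the detections come from the EasyOpenCV AprilTag pipeline commonly used in FTC sample code; the tag IDs assigned to each zone are placeholders.

```java
import java.util.List;
import org.openftc.apriltag.AprilTagDetection;

public class SleeveTagReader {
    // One tag ID per parking zone; the specific IDs here are placeholders.
    private static final int ZONE_1_TAG = 1, ZONE_2_TAG = 2, ZONE_3_TAG = 3;

    // Assumes the detections come from the EasyOpenCV AprilTag detection pipeline,
    // fed by the USB camera mounted on top of the robot.
    public int chooseParkingZone(List<AprilTagDetection> detections) {
        for (AprilTagDetection tag : detections) {
            if (tag.id == ZONE_1_TAG) return 1;
            if (tag.id == ZONE_2_TAG) return 2;
            if (tag.id == ZONE_3_TAG) return 3;
        }
        return 2; // fall back to a default zone if no tag was seen before the match
    }
}
```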


These tags thereby provided less ambiguous and more distinguishable “signals” for the sleeves; an AprilTag is far less likely to get mixed up with random everyday objects!


What to See in the Future


Throughout our 2022-2023 season in FTC, we have gathered abundant knowledge about odometry, sensors, and object detection.


But we will not stop there!


As a high school robotics team, we will continue expanding the bounds of our knowledge, so we can implement innovative strategies for our autonomous programming in the future.


Although the 2023-2024 season has not yet started, we have used the extra off-season time to hone our skills and gain technical knowledge about software development and object detection.


Before our school year ended, we seized the opportunity to experiment with odometry and learn about the kinematics behind autonomous programming. For instance, our team captain worked on refining the curves used in the robot’s motion planning so that it could compute its position more precisely and make the necessary movements.


We were also able to tinker with and experiment on our odometry setup. That way, we could test our many ideas about odometry installation now and draw on what we learn later: keeping the approaches that work and discarding the unhelpful ones, so we don’t waste time once the season starts.


More importantly, we have been considering another object detection algorithm: YOLO. The algorithms we have used so far separate localization and classification into a two-stage process. YOLO, by contrast, classifies and localizes objects with a single neural network in one pass (hence the acronym: You Only Look Once). This is why we believe it might be more efficient for FTC.


We have yet to research and test YOLO’s capabilities, but this algorithm is certainly under consideration.



In conclusion, automation was an indispensable part of the Picky Eater’s design. Odometry, sensors, and object detection were the crucial aspects its automation revolved around. By reflecting on these aspects during our meetings, CAOS Robotics has kept learning more about motion planning and object detection, and we hope to apply this ever-growing knowledge of automation and autonomous programming in the future.

