
Gnorp: An Overview

Updated: Oct 12, 2024


Gnorp, pictured above


By Lucy C.


Our website is nearly two years old. During its first year, we designed the Picky Eater, a robot tasked with intaking cones and plunging them down plastic poles called junctions. What about this year?


During this year's FTC season, we focused on designing Gnorp, a robot engineered to compete in the 2023-2024 Tech Challenge.


Overview of This Season's Tech Challenge

As this blog post will illustrate, this year's STEAM-themed Tech Challenge encouraged competitors to fuse art and engineering in the design of their robots. The challenge was divided into three main tasks:


1. Robots must be able to intake colorful hexagonal tiles called "pixels" and deposit them onto a black backdrop to create mosaics. To qualify as a "mosaic," a pattern of exactly three adjoining pixels must be formed on the backdrop.


2. Teams must also create origami airplanes called "drones," which the robot must launch over a pair of trusses placed in the middle of the field. The number of points a launch earns depends on how far the drone lands from the launch area; the farther, the better.


3. Competitors must engineer the robot so that it can grab, lift, and suspend itself from one of the riggings on the trusses. Accomplishing this feat earns a team a 20-point bonus.


Anatomy of Our Hardware: An Overview


The hardware of our robot can be divided into four main functions: basic maneuvers, intake, outtake, and launching.


Basic maneuvers

Though seemingly simple, the hardware that allowed our robot to perform basic maneuvers, a mecanum drivetrain paired with odometry, was the very reason we were able to enter a functioning, qualifying robot into this year's competitions.
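To make this concrete, here is a minimal sketch of standard mecanum drive mixing in an FTC OpMode: forward, strafe, and turn inputs are combined into four wheel powers. The motor configuration names are hypothetical placeholders, not necessarily the ones we used on Gnorp:

```java
import com.qualcomm.robotcore.eventloop.opmode.OpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
import com.qualcomm.robotcore.hardware.DcMotor;
import com.qualcomm.robotcore.hardware.DcMotorSimple;

@TeleOp(name = "MecanumSketch")
public class MecanumSketch extends OpMode {
    private DcMotor frontLeft, frontRight, backLeft, backRight;

    @Override
    public void init() {
        // Hypothetical configuration names.
        frontLeft  = hardwareMap.get(DcMotor.class, "frontLeft");
        frontRight = hardwareMap.get(DcMotor.class, "frontRight");
        backLeft   = hardwareMap.get(DcMotor.class, "backLeft");
        backRight  = hardwareMap.get(DcMotor.class, "backRight");
        // Right-side motors spin opposite the left when mounted mirrored.
        frontRight.setDirection(DcMotorSimple.Direction.REVERSE);
        backRight.setDirection(DcMotorSimple.Direction.REVERSE);
    }

    @Override
    public void loop() {
        double drive  = -gamepad1.left_stick_y;  // forward/back
        double strafe =  gamepad1.left_stick_x;  // left/right
        double turn   =  gamepad1.right_stick_x; // rotation

        // Standard mecanum mixing; normalize so no power exceeds 1.0.
        double max = Math.max(1.0,
                Math.abs(drive) + Math.abs(strafe) + Math.abs(turn));
        frontLeft.setPower((drive + strafe + turn) / max);
        frontRight.setPower((drive - strafe - turn) / max);
        backLeft.setPower((drive - strafe + turn) / max);
        backRight.setPower((drive + strafe - turn) / max);
    }
}
```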


Intake

We designed a claw mechanism to help our robot intake pixels to paint mosaics.


Outtake

While our intake used a claw mechanism, our outtake contained a simple yet effective insert mechanism that allowed the seamless depositing of pixels onto the backdrop.


Launching

With the aid of computer-aided design, we were able to model and plan the 3D printing of a drone-launching mechanism.



Our Vision: An Overview

Arguably among the most integral components of our robot, our camera and computer vision software were essential to both our autonomous and teleoperated performance.


Encased in a mount we 3D-printed ourselves, our camera was positioned adjacent to the outtake. Because the outtake had to point at the backdrop to release pixels, this mounting position let the robot gather vision data on the backdrop and correct its own position accordingly, making pixel deposits easier.


Additionally, with the help of EasyOpenCV, a computer vision library designed for FTC robots, Gnorp could automatically detect the position of our 3D-printed team element. We calibrated our color thresholding to distinguish our red team element from surrounding objects, and we designed our profiling to ignore luminosity so that the robot could detect the element regardless of how well-lit the environment was. We then programmed the robot to detect contours so it could make sense of the space it was in. With these contours, the robot could set a "horizon line," an imaginary line separating our playing field from the background, which prevented Gnorp from confusing the color of the background with the color of our element.

Using this information, the robot could reliably locate our team element, which was important for earning bonus points during the autonomous period. If a robot autonomously placed a purple pixel on a spike mark (marked in colored tape), it earned ten points. But if it placed that purple pixel on the spike mark where the team prop or element sat, it earned 20 points! That is why successful computer vision was so important.
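For the curious, here is a minimal sketch of what a pipeline like this can look like in EasyOpenCV: an HSV threshold on hue and saturation (so luminosity matters less), a horizon line, and contour detection. The threshold values, horizon row, and class name are illustrative placeholders rather than our tuned numbers or actual code:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;
import org.openftc.easyopencv.OpenCvPipeline;
import java.util.ArrayList;
import java.util.List;

public class ElementPipeline extends OpenCvPipeline {
    private final Mat hsv = new Mat();
    private final Mat mask = new Mat();
    private final Mat hierarchy = new Mat();
    private volatile int elementX = -1; // center column of the detected element

    @Override
    public Mat processFrame(Mat input) {
        // Convert to HSV so we threshold on hue/saturation and largely
        // ignore luminosity (the V channel is left wide open below).
        Imgproc.cvtColor(input, hsv, Imgproc.COLOR_RGB2HSV);

        // Red sits near hue 0 in OpenCV's 0-180 hue range.
        Core.inRange(hsv, new Scalar(0, 100, 0), new Scalar(10, 255, 255), mask);

        // "Horizon line": blank everything above a chosen row so red
        // objects in the background cannot be mistaken for the element.
        int horizonRow = 120; // placeholder; tuned to the camera's view
        mask.submat(0, horizonRow, 0, mask.cols()).setTo(new Scalar(0));

        // Find contours in the mask and keep the largest one.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, hierarchy,
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        double bestArea = 0;
        for (MatOfPoint c : contours) {
            double area = Imgproc.contourArea(c);
            if (area > bestArea) {
                bestArea = area;
                Rect box = Imgproc.boundingRect(c);
                elementX = box.x + box.width / 2;
            }
        }
        return input; // shown unchanged on the camera preview
    }

    public int getElementX() { return elementX; }
}
```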


As in last year's competition, our robot could also detect April Tags with the help of our camera and software. Since April Tags were taped onto the backdrops, April Tag detection helped make up for heading error during teleop. In other words, if our robot began swerving slightly in the wrong direction due to driver error or field conditions, it could use the position of an April Tag to automatically make small adjustments to its heading. That way, it could maintain a straighter, more streamlined path toward the backdrop.
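As a rough illustration of this correction idea, the sketch below applies a simple proportional controller to the tag's measured yaw. The gain and clamp values are made-up placeholders, and the April Tag detection plumbing that supplies the yaw is elided:

```java
public class BackdropAligner {
    private static final double KP = 0.02;             // proportional gain (illustrative)
    private static final double MAX_CORRECTION = 0.3;  // cap on vision's influence

    /**
     * Returns a turn-power correction to add to the driver's turn input.
     * yawDegrees is the tag's yaw relative to the camera: zero when the
     * robot faces the backdrop squarely.
     */
    public double headingCorrection(double yawDegrees) {
        double correction = -KP * yawDegrees;
        // Clamp so the vision correction never overpowers the human driver.
        return Math.max(-MAX_CORRECTION, Math.min(MAX_CORRECTION, correction));
    }
}
```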


Other Control Milestones: An Overview

Besides computer vision, the other key milestones in our software design were as follows:

  • We installed various sensors to make our teleoperated period easier. These included 1) touch sensors on the lift mechanism to verify the motors were functioning during lift and suspension, and 2) distance sensors on the claw to help it automatically grasp pixels in close proximity (see the first sketch after this list).

  • We created a custom collision avoidance algorithm from Bezier splines. Using online modeling software, we marked the objects and areas of the field we did not want to run into. We then designed a spline, a type of smooth curve shaped by the positions of a few discrete control points, for our robot to follow during the autonomous period to avoid these collisions (see the second sketch after this list). Customizing our own spline gave us greater precision and control over this algorithm.

  • Instead of using Roadrunner as we did for last year's pathfinding, we designed a custom control scheme. This gave us greater flexibility: we could interface our pathfinding with the collision avoidance algorithm we had already designed, and we could adjust the scheme to account for error correction in pathfinding.

  • We also designed a pre-programmed pathfinding routine that a human driver could activate with the push of a button on the controller. Once the driver pushed this button, the robot automatically positioned itself in front of the backdrop for easier pixel outtake.
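Here is a minimal sketch of the distance-sensor auto-grasp mentioned in the first bullet above, using the FTC SDK's DistanceSensor and Servo classes. The device names, grab threshold, and servo positions are illustrative placeholders:

```java
import com.qualcomm.robotcore.hardware.DistanceSensor;
import com.qualcomm.robotcore.hardware.HardwareMap;
import com.qualcomm.robotcore.hardware.Servo;
import org.firstinspires.ftc.robotcore.external.navigation.DistanceUnit;

public class AutoGraspClaw {
    private static final double GRAB_RANGE_CM = 4.0;  // placeholder threshold
    private static final double CLAW_CLOSED = 0.25;   // placeholder servo positions
    private static final double CLAW_OPEN = 0.55;

    private final DistanceSensor rangeSensor;
    private final Servo clawServo;

    public AutoGraspClaw(HardwareMap hardwareMap) {
        // Hypothetical configuration names.
        rangeSensor = hardwareMap.get(DistanceSensor.class, "clawRange");
        clawServo = hardwareMap.get(Servo.class, "claw");
    }

    /** Call every loop: close the claw automatically when a pixel is near. */
    public void update() {
        if (rangeSensor.getDistance(DistanceUnit.CM) < GRAB_RANGE_CM) {
            clawServo.setPosition(CLAW_CLOSED);
        } else {
            clawServo.setPosition(CLAW_OPEN);
        }
    }
}
```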
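And here is a small, self-contained sketch of the spline idea from the second bullet: evaluating a cubic Bezier curve and sampling it into waypoints a path follower could chase. The control points are made-up field coordinates, not our actual path:

```java
public class BezierSketch {
    /** Evaluate one coordinate of a cubic Bezier curve at parameter t in [0, 1]. */
    static double cubicBezier(double p0, double p1, double p2, double p3, double t) {
        double u = 1 - t;
        return u*u*u*p0 + 3*u*u*t*p1 + 3*u*t*t*p2 + t*t*t*p3;
    }

    public static void main(String[] args) {
        // Hypothetical control points (inches): a start point, two shaping
        // points pulled away from an obstacle, and a goal near the backdrop.
        double[] xs = {0, 12, 36, 48};
        double[] ys = {0, 24, 24, 0};
        // Sample the curve into waypoints for the robot to follow.
        for (int i = 0; i <= 10; i++) {
            double t = i / 10.0;
            double x = cubicBezier(xs[0], xs[1], xs[2], xs[3], t);
            double y = cubicBezier(ys[0], ys[1], ys[2], ys[3], t);
            System.out.printf("t=%.1f  (%.1f, %.1f)%n", t, x, y);
        }
    }
}
```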



Gnorp, a robot designed for FIRST's STEAM-themed Tech Challenge, consisted of a dual claw to intake pixels, a dual outtake to "paint" pixels, a 3D-printed drone launcher, and various hardware components, such as odometry, mecanum wheels, and a linear actuator mechanism, to enable basic maneuvers and a successful hang from one of the trusses' riggings. In addition, our team installed a camera with effective computer vision that could detect April Tags and our team element. We also installed sensors and created custom algorithms to ensure collision avoidance and greater precision in our robot's pathfinding and overall function.


Designing Gnorp was indeed a STEAM-themed challenge: we needed creativity and flexibility to paint mosaics and customize our pathfinding, drones, and aesthetic. Meanwhile, we needed technical expertise to ensure our robot could function, maintain balance during suspension, and operate both autonomously and during teleoperation. This challenge greatly altered our perspectives on software, engineering, and the role of art and creativity in the STEM industry.
