The Value of Drive Mode in AIM: Laying a Foundation for Coding Success
In learning robotics and coding, driving the robot and coding the robot are often treated as two separate activities: we're either driving OR coding. In reality, driving and coding a robot are interconnected. Understanding how the robot moves while driving supports coding, and understanding coding supports greater engagement with driving. The VEX AIM Coding Robot's built-in Drive mode is more than a simple control mechanism—it provides a hands-on way for students to understand movement, explore sensors (like the AI Vision Sensor), and develop essential problem-solving skills. Students can drive the robot using the One Stick Controller to complete tasks and interact with objects. By taking the time to engage with Drive mode, students gain insights that serve as a foundation for their coding journey.
Why Driving Matters in the Learning Process
At first glance, driving a robot using a controller might seem like a basic exercise, but in reality, it is a key component of robotics and coding learning. Driving to complete a task allows students to develop an understanding of:
- Movement and physics: Students experience how the robot moves in response to input, helping them visualize key concepts such as speed, direction, and turning radius.
- Control and responsiveness: Through hands-on driving, students learn how subtle changes in control inputs affect movement, which directly connects to programming motor functions later.
- Strategic thinking: Driving allows students to explore different ways to navigate a space, interact with objects, and plan paths—skills that are essential when transitioning to autonomous coding challenges.
By interacting with Drive mode first, students can build a mental model of how the robot responds to commands, making it easier to translate these observations into code when they begin programming their robot to drive autonomously.
Features of AIM Drive Mode
The built-in Drive mode includes several key functionalities that support learning:
- Joystick-Based Movement: Students control forward, reverse, and rotational movement using joystick axes, mimicking real-world robotics control systems.
- AI-Assisted Actions: The built-in AI Vision Sensor can detect and interact with objects like balls and barrels, giving students a glimpse into how AI-powered decision-making works.
- Automated Reactions: There are also included functions for detecting crashes, reacting to sensor inputs, and using visual indicators like emojis and LEDs to communicate the robot’s state.
- Adaptive Strategies: Students can experiment with different strategies using side-to-side movement, automated object collection, and strategic positioning—all skills that will later translate into algorithm development.
Picture this scenario: The teacher introduces an activity in which students use the robot to move barrels to different AprilTag locations. Before they begin to code, students work in their groups to first drive the robot to complete the task. Watching one group, the teacher observes that as one student drives the robot with the controller, narrating each movement aloud, their partner documents the path in a notebook. Once they've completed the task, they review the notes together, and the teacher asks questions about their plan. They then move to a VEXcode project and begin to code the first couple of movements from their plan. The students run the project to test it, observing the movement of the robot and comparing it to the driven strategy.
From Driving to Coding: Making the Transition
As students move from manually controlling the robot in Drive mode to programming autonomous behaviors, they engage in a process of sense-making that aligns with research on learning and cognition. Shifting between physical modeling, like driving to complete a task, and computational modeling, like completing that same task with code, allows students to explore concepts from multiple perspectives. Discussion throughout these two phases, and iterative movement between them, takes hands-on learning to the next level. Rather than simply having their interest piqued by a novel driving experience, students can use driving as a strategy to deepen their understanding of coding concepts.
This transition is not simply about replacing one mode of interaction with another; rather, it is about iteratively refining their understanding of movement, control, and problem-solving.
For example, let’s continue with the above scenario. The teacher observes a different group troubleshooting their code with driving. When the group runs their project, they notice that the angle the robot moves at is different from what they intended, based on their driving. They then return to Drive mode, moving the robot slowly, to closely observe and measure the movement angle. They adjust their code and run it again, observing that the robot now moves much closer to how it was driven.
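The adjustment this group makes can be sketched in code. The sketch below is a minimal illustration only: it uses a stand-in `StubRobot` class with a hypothetical `turn_for` command, not the actual VEX AIM Python API, whose command names and signatures may differ. It shows the tuning loop—measure the driven angle, compare it to the coded angle, and correct the code before re-testing.

```python
# A stand-in robot for illustration; the real VEX AIM API differs.
class StubRobot:
    def __init__(self):
        self.heading = 0.0

    def turn_for(self, degrees):
        # Hypothetical turn command: accumulates heading in degrees.
        self.heading += degrees

measured_driven_angle = 37.0   # angle the group measured while driving slowly
coded_angle = 45.0             # angle originally written in the project

# First test run: the coded turn overshoots the driven path.
robot = StubRobot()
robot.turn_for(coded_angle)
error = robot.heading - measured_driven_angle   # 8 degrees too far

# Adjust the code toward the measurement and run again.
coded_angle -= error
robot = StubRobot()
robot.turn_for(coded_angle)
print(robot.heading)  # → 37.0, matching the driven angle
```

The same measure–compare–adjust cycle works whatever the actual command set is; the driven behavior supplies the target, and the code converges toward it over a few test runs.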
Drive mode provides students with a tangible, embodied experience of how the robot moves in response to external inputs. When they begin writing code, they are no longer working in the abstract—they are building upon direct experiences that help them predict and refine their programming decisions. This iterative process of moving between hands-on interaction and computational modeling helps students:
- Identify and test variables affecting motion and navigation.
- Recognize patterns in movement and control that can be translated into algorithms.
- Make meaningful connections between manual control and sensor-based automation.
For example, back in our scenario, the teacher hears one group struggling to get started because they have several ideas for how to complete the task. To help them decide which path to choose for their project, the teacher encourages them to drive all of the different ideas and make notes about how quickly each completes the task. Based on this data, students can then choose the fastest path and begin the coding process. In their notebooks, they note the blocks they think align with each driven behavior, making choices like when to move the robot with a Movement block versus when to use the AI Vision Sensor to ‘get ball’ or ‘turn to barrel’.
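The notebook mapping these students make—one command per driven behavior—can be sketched as an ordered sequence. This is an illustration only: `StubRobot` and its method names (`move_for`, `turn_to_barrel`, `get_ball`) are stand-ins echoing the block names above, not the actual VEX AIM Python API, and the distances are invented for the example.

```python
# A stand-in robot that records each command, so a coded plan can be
# reviewed side by side with the driven notes in the notebook.
class StubRobot:
    def __init__(self):
        self.log = []

    def move_for(self, distance_mm):
        # Stand-in for a Movement block.
        self.log.append(f"move {distance_mm} mm")

    def turn_to_barrel(self):
        # Hypothetical AI Vision Sensor behavior.
        self.log.append("turn to barrel (AI Vision)")

    def get_ball(self):
        # Hypothetical AI Vision Sensor behavior.
        self.log.append("get ball (AI Vision)")

# The fastest driven strategy, one command per observed behavior.
robot = StubRobot()
robot.move_for(300)      # drive out of the start area
robot.turn_to_barrel()   # let the sensor find the barrel
robot.move_for(150)      # approach it
robot.get_ball()         # collect the object

for step in robot.log:
    print(step)
```

Writing the plan this way keeps the decision visible: each line is either a fixed movement the students measured while driving, or a sensor-driven behavior they chose to delegate to the AI Vision Sensor.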
By foregrounding driving in these activities, educators create a structured learning environment where students engage in a cycle of observation, experimentation, and reflection—key practices for developing computational thinking. Rather than seeing driving and coding as separate entities, students come to understand how both contribute to a deeper, more flexible problem-solving approach.
Returning to our scenario, as students work to complete the challenge, each group moves back and forth between driving and coding to build a successful project. The teacher encourages students to move around the room to observe different driving strategies, ask questions about how to code different kinds of motion, or share ideas about coding choices. Students use driving as a problem-solving strategy for their coding projects, making them more engaged and independent in the learning process.
This structured approach allows students to make meaning out of abstract programming concepts by connecting real-world interactions to coded behaviors, strengthening their ability to think algorithmically and apply computational reasoning to robotics challenges.
To learn more about VEX AIM, check out the VEX AIM section of the PD+ Community. Curious about how VEX AIM could fit into your classroom? Schedule a 1-on-1 Session to talk about it.