Reading a Technical Research Paper // Debugging, Proportional Control, and ROS Parameters

Today

  • Reading a Technical Research Paper (For Your Consideration)
  • Robot Debugging Strategies (Group Exercise)
  • ROS and Threading (Code Walk-Through)
  • ROS Params and Proportional Control (Coding Exercise)
  • Studio Time

For Next Time

What are Broader Impacts?

As we touched on during the first class meeting, and as you’ve started chewing on in the Broader Impacts assignment, robots – by virtue of being embodied in the world – uniquely engage with the world in a way that other forms of computing-based technology may not. “Robo-ethics” is a subfield within robotics that formally develops methods for analyzing the impact of robotic systems. Some resources you might find interesting as a launch point for learning more about the robo-ethics field are below:

While robo-ethicists make it their primary vocation to understand the nuances of robotics use, every roboticist or robotics-adjacent engineer, manager, researcher, or entrepreneur deals with practical quandaries each day that require applying (implicitly or explicitly) a values-based framework – from the design of a particular interface (whom is this interface for? how much information from the backend should be legible?), to the selection of components (what is the lifecycle of this part? who is supplying this part?), to the creation of design specifications (what is the intended use of the robot? how will that intended use be protected?).

Open-Ended Discussion - Broader Impacts and Robotics Funding

“Broader Impacts” is a term that attempts to make apparent the values- or ethics-based frameworks that people apply to the technology they produce or the work they engage in. The National Science Foundation (NSF) in the US requires a “broader impacts” component in research proposals, where the definition of a broader impact is expansive and may cover:

  • Public engagement in the work to be completed
  • Developing partnerships across academic / industrial sectors, or across disciplines
  • Explicitly working on a project that contributes to societal well-being
  • Contributing to national security
  • Ensuring participants are representative of various backgrounds and their engagement is inclusive

Let’s have a look at a broader impacts statement from an NSF proposal submitted to the NSF National Robotics Initiative solicitation, entitled “Never-ending Multimodal Collaborative Learning”, which proposes to develop algorithms for robot learning and task generalization through natural language and visual/kinesthetic demonstrations performed by a human teacher:

The National Robotics Initiative was a three-part solicitation spanning a decade that aimed to support fundamental research in the US. This flavor of the NRI was aimed at “Ubiquitous Collaborative Robots” and research to advance the development and use of co-robots (robots that work with/near people). Ubiquity was defined as “seamless integration of co-robots to assist humans in every aspect of life.” You can read the full solicitation here. This program was sunset in 2022.

The proposed research will reduce the cost of programming robots and other technology, such as personal
assistants. Non-experts will be able to program and personalize robots similarly to how we program
fellow humans and especially children: by communicating in natural language (e.g., "stop fidgeting") and
demonstrating visually the desired way to do things (e.g., "open it like this"), as opposed to being
programmed by writing code or through millions of positive and negative examples. Robots will be able
to acquire new concepts and skills adapting to individual users' needs through interaction with end-users,
as opposed to maintaining a fixed set of functionalities predetermined at the factory.
The simplicity and directness of grounded natural language interfaces will help robots better serve older
adults and people with disabilities. This is just one example of the proposed technology's potential for
social good. This research is tightly coupled to the educational program of the PIs, which currently
includes a course on language grounding on vision and control, and another on architectures for never-
ending learning, with the goal of teaching students that there is more to AI than learning from a large
number of positive and negative examples.

Get together with some folks around you, and consider the following questions (~10 minutes):

  1. Using the definitions from the NSF broader impacts page, what key themes do you see emerge in this paragraph?
  2. What evaluation metrics would you use to assess whether the broader impacts goals were met over the course of this project?
  3. Are there other broader impacts that the authors don’t mention, but might be relevant to their project? Are there any unintended impacts of the work that could be considered by the project?

We’ll do a brief report out (~5 minutes) with the whole class.

Brainstorming Robot Debugging Strategies

Debugging is the act of incrementally testing code for accurate behavior and tracing errors back through the system to resolve them. You may have encountered some debugging strategies in SoftDes. Some generic strategies for debugging software carry over to robotics programming, while new methods may be needed given software’s interaction with hardware.

Group Discussion

Take 10 minutes to come up with some debugging strategies for writing robotics code with the folks around you, then we’ll share out to the class. As a motivating example, let’s consider the part of the Warmup Project where you have to create a person follower.

Here are some areas to consider in the debugging / development lifecycle:

  1. How do you ensure your code is correct (implements the strategy you expected)?
  2. How do you test your approach to see if it performs the task effectively (e.g., follows a person)?
  3. How might you tune the parameters of your approach to make it perform as well as possible?

ROS and Threading

A “thread” is an independent flow of execution in a computer program; “threading” refers to creating multiple concurrent pathways of execution. When using ROS2, the concept of threading can arise in multiple ways, but a common one is when we use subscription callbacks alongside running loops within our code.

We’ll go over some points regarding how ROS2 deals with different threads of execution. In order to structure our work, we’re going to be looking at two pieces of sample code:

Note: you might find looking at these pieces of code quite useful for the Warm-Up Project!

Why would we want to perform threading, as opposed to timing (as we have been)?

  • Avoiding “blocked” callbacks: in sequential execution, callbacks run one at a time, and each is blocked from being triggered until the previous one has finished. If we put a lot of “work” in a callback, we can delay everything downstream. Threading avoids this issue (mostly: because of Python’s global interpreter lock there is no true parallelism for Python code, but execution can still overlap, which can be incredibly helpful).
  • Timing can be fraught: if we set a timer, then everything in that loop must execute within that period or weird / unintended behavior can occur. Threading allows callbacks to run on their own schedule.
  • Threading gives us control of information flow: within ROS2, threading gives us the flexibility to choose which callbacks run when in execution (and therefore which work or data is protected from deadlock).
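The “blocked callback” problem can be sketched in plain Python (this mimics the structure rclpy manages for you; it is not rclpy itself, and the function names are illustrative):

```python
import threading
import time

# A slow "callback" that would block a sequential loop can instead run on its
# own thread while the main loop keeps ticking.

log = []

def slow_callback():
    # Pretend this is a subscription callback doing heavy work.
    time.sleep(0.3)
    log.append("callback done")

def main_loop():
    # Pretend this is the node's run loop publishing commands.
    for _ in range(4):
        log.append("loop tick")
        time.sleep(0.05)

worker = threading.Thread(target=slow_callback)
worker.start()
main_loop()      # keeps ticking while the callback is still sleeping
worker.join()

print(log)       # the loop ticks land before "callback done"
```

Run sequentially instead, the four loop ticks would have to wait the full 0.3 seconds for the callback to finish.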

If you want to learn more, this conversation on the ROS discourse is an excellent source!

Proportional Control

So far we’ve programmed robots to choose between a small set of motor commands (move forward, stop, etc.) based on sensor readings. Today, we will be experimenting with setting the motor command proportional to the error between the robot’s current position and the desired position.

To get the idea, program the Neato to adjust its position so that it is a specified (target) distance away from the wall immediately in front of it. The Neato’s forward velocity should be proportional to the error between the target distance and its current distance. It’s tricky to get the sign correct; run through a few mental simulations to make sure the robot will move in the right direction.

Note: you might be interested in adapting this in your Warm-Up project wall-follower or person-follower code!
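The proportional rule itself is one line; the sign is where the mental simulation pays off. Here is a minimal sketch (the names `Kp` and `target_distance` are illustrative, not from a specific API):

```python
# Minimal sketch of proportional control for the wall-approach task.

Kp = 0.5               # proportional gain: how aggressively to correct error
target_distance = 1.0  # desired distance from the wall (meters)

def proportional_velocity(current_distance):
    # Error is positive when we are too far from the wall, so a positive
    # (forward) velocity reduces it. Flip the sign and the robot drives
    # away from the wall instead.
    error = current_distance - target_distance
    return Kp * error

print(proportional_velocity(2.0))   # 0.5: too far, drive toward the wall
print(proportional_velocity(0.5))   # -0.25: too close, back up
print(proportional_velocity(1.0))   # 0.0: at the target, stop
```

Note that the command shrinks as the robot nears the target, so the Neato slows smoothly instead of overshooting and oscillating the way a fixed-speed bang-bang controller would.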

To get started, create a package somewhere in your ros2_ws/src directory for your work. In this example, we put the package directly in the ros2_ws/src/class_activities_and_resources directory and then rebuild the workspace:

$ cd ~/ros2_ws/src/class_activities_and_resources
$ ros2 pkg create in_class_day04 --build-type ament_python --node-name wall_approach --dependencies rclpy std_msgs geometry_msgs sensor_msgs neato2_interfaces
$ cd ~/ros2_ws
$ colcon build --symlink-install
$ source ~/ros2_ws/install/setup.bash

You may have noticed at this point that ROS requires a certain amount of boilerplate code to get going. If you are having trouble with this, or would rather skip ahead to the proportional control part, you can grab some starter code for wall_approach.py.

A helpful tool for visualizing the results of your program is to use rqt. First, start up the GUI:

$ rqt

Next, go to Plugins -> Visualization -> Plot.

Type /scan/ranges[0] (if that is in fact what you used to calculate forward distance) into the topic field and then hit +.

Tip: to change the zoom in the plot, hold down the right mouse button and drag up or down on the body of the plot (it’s pretty finicky, but it does work).

You can use this link to find a sample solution to this task.

Getting Fancy: ROS Params

To make a node more configurable, you can use ROS Params, which allow us to pass arguments to a node from the command line (or control them through tools like rqt). This is super powerful, because it lets you adjust your robot’s performance and behavior in real time without killing, rewriting, and re-running your nodes. For proportional control, we could set our proportional coefficient and our target wall distance this way.

For instance, if you follow the documentation you can create a node similar to our sample solution, wall_approach_fancy.py that supports the following customization via the command line:

$ ros2 run in_class_day04_solutions wall_approach_fancy --ros-args -p target_distance:=1.5 -p Kp:=0.5

Here is a demo of the script, wall_approach_fancy.py that uses ROS parameters as well as the tool dynamic_reconfigure for easy manipulation of various node parameters.

Note that in order to support dynamic_reconfigure in your nodes, you have to call add_on_set_parameters_callback and implement an appropriate callback function (see sample solution for more on this).
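The idea behind that callback is a validate-then-apply pattern: every requested parameter change is passed to your callback, which accepts or rejects it before the new value takes effect. Here is a plain-Python sketch of the pattern (the `ParamStore` class and its names are illustrative stand-ins, not the rclpy API; see the sample solution for the real thing):

```python
# Plain-Python sketch of the validate-then-apply pattern behind rclpy's
# add_on_set_parameters_callback. ParamStore is an illustrative stand-in.

class ParamStore:
    def __init__(self, **defaults):
        self.values = dict(defaults)
        self._callbacks = []

    def add_on_set_parameters_callback(self, cb):
        # Register a validator that runs before any update is applied.
        self._callbacks.append(cb)

    def set(self, name, value):
        # Every registered callback must accept the change, or it is rejected.
        if all(cb(name, value) for cb in self._callbacks):
            self.values[name] = value
            return True
        return False

params = ParamStore(target_distance=1.0, Kp=0.5)

# Validator: gains and distances must be positive numbers.
params.add_on_set_parameters_callback(
    lambda name, value: isinstance(value, (int, float)) and value > 0
)

params.set("Kp", 0.8)       # accepted
params.set("Kp", -2.0)      # rejected; the old value is kept
print(params.values["Kp"])  # 0.8
```

Rejecting bad values in the callback is what keeps a stray `ros2 param set` (say, a negative gain) from sending the robot off in the wrong direction mid-run.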

An animated GIF that shows a robot attempting to maintain a particular distance from a wall.