Radio Frequency Identification (RFID) in Robotics

Passive Ultra-High Frequency (UHF) RFID tags are well matched to robots’ needs. Unlike low-frequency (LF) and high-frequency (HF) RFID tags, passive UHF RFID tags are readable from across a room, enabling a mobile robot to efficiently discover and locate them. Because they don’t have onboard batteries to wear out, their lifetime is virtually unlimited. And unlike bar codes and other visual tags, RFID tags are readable when they’re visually occluded. For less than $0.25 per tag, users can apply self-adhesive UHF RFID tags throughout their home.

UHF RFID Hardware

Two ThingMagic Mercury 5e (M5e) UHF RFID modules form the core of the robot’s RFID sensors. One is connected to two body-mounted, long-range patch antennas that can read UHF RFID tags out to ~6 meters. The other is connected to custom, short-range, in-hand antennas embedded in the robot’s fingers that can read the same UHF tags within ~30 cm of the robot’s hand. The hardware is annotated in the figure below.
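For illustration only, the sketch below shows one way reads from the two RFID subsystems could be merged into a single labeled stream. The Reader class and its read_tags() method are hypothetical stand-ins for a real ThingMagic M5e driver, not our code.

```python
# Illustration only: merging tag reads from the long-range (body-mounted) and
# short-range (in-hand) RFID subsystems into one labeled stream. The Reader
# class and its read_tags() method are hypothetical stand-ins for a real
# ThingMagic M5e driver.

from dataclasses import dataclass
import time

@dataclass
class TagRead:
    tag_id: str      # tag EPC
    rssi_dbm: float  # received signal strength
    antenna: str     # "long_range" (patch antennas) or "in_hand" (finger antennas)
    stamp: float     # time of the read (seconds)

class Reader:
    """Hypothetical interface; a real driver would query the M5e module."""
    def __init__(self, antenna_label):
        self.antenna_label = antenna_label

    def read_tags(self):
        # Placeholder: return a list of (tag_id, rssi_dbm) pairs.
        return []

def poll_readers(readers):
    """One polling cycle over all RFID subsystems, labeled by antenna."""
    now = time.time()
    return [TagRead(tag_id, rssi, r.antenna_label, now)
            for r in readers
            for tag_id, rssi in r.read_tags()]

if __name__ == "__main__":
    body_reader = Reader("long_range")  # reads tags out to ~6 m
    hand_reader = Reader("in_hand")     # reads tags within ~30 cm of the hand
    print(poll_readers([body_reader, hand_reader]))
```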



We have demonstrated a number of capabilities enabled by RFID sensing; refer to the publications for more detailed information.


Featured Videos

PPS-Tags: Physical, Perceptual and Semantic Tags for Autonomous Mobile Manipulation: A moderate level of environmental augmentation facilitates robust robot behaviors.


Our work is generously supported in part by the Health Systems Institute and by Travis’ NSF Graduate Research Fellowship (GRFP).


Additional Videos

RF Vision: RFID Receive Signal Strength Indicator (RSSI) Images for Sensor Fusion and Mobile Manipulation: Long-range UHF RFID sensing and multi-sensor fusion for mobile manipulation.
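The core idea behind the RSSI images is that RSSI readings for a single tag, collected while the directional patch antenna sweeps in pan and tilt, can be binned by bearing into a 2D image whose bright regions suggest the tag’s direction; that image can then be registered with camera imagery for fusion. The sketch below is a minimal illustration of the binning step, with assumed bin counts and angular ranges rather than the published pipeline.

```python
# Illustration only (assumed bin counts and angular ranges, not the published
# pipeline): bin RSSI readings for one tag by the pan/tilt bearing at which
# they were taken, producing a 2D "RSSI image" whose bright bins suggest the
# tag's direction.

import numpy as np

def rssi_image(readings, pan_bins=64, tilt_bins=32,
               pan_range=(-np.pi, np.pi), tilt_range=(-np.pi / 4, np.pi / 4)):
    """readings: iterable of (pan_rad, tilt_rad, rssi_dbm) for a single tag."""
    acc = np.zeros((tilt_bins, pan_bins))
    cnt = np.zeros((tilt_bins, pan_bins))
    for pan, tilt, rssi in readings:
        i = int((tilt - tilt_range[0]) / (tilt_range[1] - tilt_range[0]) * (tilt_bins - 1))
        j = int((pan - pan_range[0]) / (pan_range[1] - pan_range[0]) * (pan_bins - 1))
        if 0 <= i < tilt_bins and 0 <= j < pan_bins:
            acc[i, j] += rssi
            cnt[i, j] += 1
    # Mean RSSI (dBm) per bearing bin; NaN where the tag was never read.
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), np.nan)

# Example: a tag read mostly near pan = 0.5 rad, tilt = 0.0 rad.
img = rssi_image([(0.5, 0.0, -55.0), (0.52, 0.01, -57.0), (-1.0, 0.1, -78.0)])
print(np.nanmax(img))
```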

A Benchmark Set of Domestic Objects for Service Robot Manipulation

The following everyday household objects are ranked based on interviews with 25 ALS patients from the Emory ALS Center. We asked the participants to rate the relative importance of having a robot retrieve each object. Although the object list was validated by a population of ALS patients and focused on object retrieval, we hope it provides a common set of practical objects for evaluating a wider range of manipulation and grasping strategies.
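For illustration, a simple way to aggregate per-participant importance ratings into a ranked object list is to sort objects by their mean rating, as sketched below. The function and example scores are hypothetical; the study’s actual methodology is described in the publications.

```python
# Illustration only: ranking objects by their mean importance rating across
# participants. The rating scale and aggregation here are hypothetical; see
# the publications for the study's actual methodology.

from collections import defaultdict

def rank_objects(ratings):
    """ratings: list of (participant_id, object_name, importance_score)."""
    scores = defaultdict(list)
    for _, obj, score in ratings:
        scores[obj].append(score)
    mean = {obj: sum(s) / len(s) for obj, s in scores.items()}
    return sorted(mean, key=mean.get, reverse=True)  # most important first

example = [(1, "TV Remote", 7), (1, "Hand Towel", 4),
           (2, "TV Remote", 6), (2, "Hand Towel", 5)]
print(rank_objects(example))  # ['TV Remote', 'Hand Towel']
```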

The object list on this web page is based on the latest data, collected through February 2009. For previous object lists, please refer to the following resources:


Object (in rank order)      Procurement
TV Remote
Medicine Pill               See #4 (Medicine Bottle)
Cordless Phone
Medicine Bottle             Local Convenience Store
Cell Phone                  Local Convenience Store
Hand Towel
Cup / Mug                   Local Convenience Store
Disposable Bottle
Shoe / Sandal
Dish Plate
Pen / Pencil
Table Knife                 See #5 (fork)
Credit Card                 Local Convenience Store
Medicine Box                Local Convenience Store
Plastic Container           Local Convenience Store
Non-Disposable Bottle
Small Pillow                Local Convenience Store
Walking Cane
Wrist Watch


Robotic Playmates

When young children play, they often manipulate toys that have been specifically designed to accommodate and stimulate their perceptual-motor skills. Robotic playmates capable of physically manipulating toys have the potential to engage children in therapeutic play and augment the beneficial interactions provided by overtaxed caregivers and costly therapists. To date, assistive robots for children have almost exclusively focused on social interaction and teleoperative control. This project represents progress toward the creation of robots that can engage children in manipulative play.

Alex Trevor and Prof. Charlie Kemp, in collaboration with Prof. Ayanna Howard and the HumAnS Lab, are the main investigators on this project, which has been generously funded by the Center for Robotics and Intelligent Machines (RIM@GT).

Teleoperation for Mobile Manipulation

Force Feedback Teleoperation for Performing Hygiene Tasks

We expect that haptic teleoperation of compliant arms will be especially important for assistive robots designed to help older adults and people with disabilities perform activities of daily living (ADLs). Research has shown that brushing teeth, shaving, cleaning, and washing are high-priority hygiene tasks for people with disabilities. We describe a teleoperated assistive robot that uses compliant arms and provides force feedback to the operator, and we present one of the first user studies to examine how force feedback and arm stiffness influence task performance when teleoperating a very low stiffness arm.

Teleoperation System

The teleoperation system consists of a master console and a slave robot. The slave robot is Cody (Fig. 1a), to which we attached a flat, 3D-printed, spatula-like end effector (Fig. 1b) designed to resemble an extended human hand, with whiteboard-eraser felt attached to its bottom surface. The master console (Fig. 1c) consists of two PCs and a pair of PHANToM Omni (SensAble Technologies) haptic interfaces that provide force feedback for translational motion (position) only.
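The sketch below illustrates the general structure of one bilateral teleoperation cycle: read the master stylus position, map it into the slave’s workspace, and reflect a scaled, clipped version of the measured contact force back to the master. The gains, force limit, and device interfaces are assumed placeholders, not the parameters or APIs of our system.

```python
# Illustration only: the general shape of one bilateral teleoperation cycle.
# Gains, force limit, and the device interfaces are assumed placeholders, not
# the parameters or APIs of the actual system.

import numpy as np

WORKSPACE_SCALE = 2.0  # master motion -> slave motion (assumed)
FORCE_SCALE = 0.5      # measured contact force -> feedback force (assumed)
FORCE_LIMIT = 3.0      # N, stay within what an Omni-class device can render

def teleop_step(master, slave):
    """Command the slave from the master pose and reflect contact force back."""
    master_pos = np.asarray(master.get_position())       # meters, 3-vector
    slave_goal = slave.home_position + WORKSPACE_SCALE * master_pos
    slave.command_end_effector_position(slave_goal)

    contact = np.asarray(slave.measure_contact_force())  # Newtons, 3-vector
    feedback = FORCE_SCALE * contact
    norm = np.linalg.norm(feedback)
    if norm > FORCE_LIMIT:
        feedback *= FORCE_LIMIT / norm                   # clip for safety
    master.set_force(feedback)

# Minimal stand-ins so the sketch runs without hardware:
class FakeMaster:
    def get_position(self): return [0.01, 0.0, 0.0]
    def set_force(self, f): print("feedback force (N):", f)

class FakeSlave:
    home_position = np.array([0.5, 0.0, 1.0])
    def command_end_effector_position(self, p): print("end effector goal (m):", p)
    def measure_contact_force(self): return [0.0, 0.0, 8.0]

if __name__ == "__main__":
    teleop_step(FakeMaster(), FakeSlave())
```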


Featured Videos

Force Feedback Teleoperation of Cody to perform a Cleaning Task: An operator teleoperating Cody to perform a simulated hygiene task by cleaning dry-erase marks off a mannequin.

Effects of Force Feedback and Arm Compliance on Teleoperation

We conducted a pilot study to investigate the effects of force feedback and arm compliance on the performance of a simulated hygiene task. In this study, each subject (n=12) teleoperated a compliant arm to clean dry-erase marks off a mannequin with or without force feedback, and with lower or higher stiffness settings for the robot’s arm. Under all four conditions, subjects successfully removed the dry-erase marks, but trials performed with the stiffer settings were completed significantly faster. The presence of force feedback significantly reduced the mean contact force, although those trials took significantly longer. Refer to the publications for more detailed information.
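As an illustration of the kind of comparison reported above, the snippet below runs a t-test on per-trial mean contact forces with and without force feedback. The numbers are made-up example values and the test choice is illustrative; they are not the study’s data or analysis.

```python
# Illustration only: the kind of comparison described above, using made-up
# per-trial mean contact forces (Newtons). These are NOT the study's data,
# and the study's actual statistical tests may differ.

from scipy import stats

with_feedback    = [2.1, 1.8, 2.4, 1.9, 2.2, 2.0]
without_feedback = [3.0, 3.4, 2.8, 3.1, 3.3, 2.9]

t, p = stats.ttest_ind(with_feedback, without_feedback)
print(f"mean with FB: {sum(with_feedback) / len(with_feedback):.2f} N, "
      f"mean without FB: {sum(without_feedback) / len(without_feedback):.2f} N, "
      f"p = {p:.3g}")
```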


The mean contact forces for each block: FC block uses the compliant arm with force feedback; FS block uses the stiffer arm with force feedback; NC block uses the compliant arm without feedback; and NS block uses the stiffer arm without feedback. Error bars show standard error of the mean. Bars with the same letter were not significantly different, while A and B were (p<0.01).


Histogram of the mean completion time: all trials with force feedback (FB) versus without force feedback (No FB); all trials using the compliant setting (Comp) versus trials using the stiffer setting (Stiff). Error bars show standard error of the mean.




Our work is generously supported in part by NSF grant IIS-0705130.




Clickable World Interface

Laser Pointer Interface

We have developed a novel interface for human-robot interaction and assistive mobile manipulation. The interface enables a human to intuitively and unambiguously select a 3D location in the world and communicate it to the robot. The human points at a location of interest and illuminates it (“clicks it”) with an unaltered, off-the-shelf, green laser pointer. The robot detects the resulting laser spot with an omnidirectional, catadioptric camera with a narrow-band green filter. After detection, the robot moves its stereo pan/tilt camera to look at this location and estimates the location’s 3D position with respect to the robot’s frame of reference.
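The sketch below illustrates the laser-spot detection step: difference the current image from the green-filtered omnidirectional camera against a background frame and take the brightest green residual as the candidate spot. The differencing scheme and threshold are assumptions for illustration, not the system’s actual detector; after detection, the robot points its stereo pair along the corresponding bearing and triangulates the 3D position.

```python
# Illustration only (assumed differencing scheme and threshold, not the actual
# detector): find the brightest green residual between the current frame and a
# background frame from the green-filtered omnidirectional camera.

import numpy as np

def detect_laser_spot(current_frame, background_frame, threshold=40):
    """Return the (row, col) of a candidate laser spot, or None.

    Both frames are HxWx3 uint8 arrays (BGR channel order assumed).
    """
    diff = current_frame.astype(np.int16) - background_frame.astype(np.int16)
    # Reward pixels that got brighter in green more than in blue/red.
    green_gain = diff[:, :, 1] - 0.5 * (diff[:, :, 0] + diff[:, :, 2])
    row, col = np.unravel_index(np.argmax(green_gain), green_gain.shape)
    if green_gain[row, col] < threshold:
        return None  # no sufficiently bright green spot detected
    return row, col  # next: aim the stereo pair along this bearing and triangulate

# Example with synthetic frames:
bg = np.zeros((120, 160, 3), dtype=np.uint8)
cur = bg.copy()
cur[60, 80, 1] = 200  # a bright green pixel
print(detect_laser_spot(cur, bg))  # (60, 80)
```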

Unlike previous approaches, this interface for gesture-based pointing requires no instrumentation of the environment, makes use of a non-instrumented everyday pointing device, has low spatial error out to 3 meters, is fully mobile, and is robust enough for use in real-world applications.

A Clickable World

When a user selects a 3D location, it triggers an associated robotic behavior that depends on the surrounding context. For example, if the robot has an object in its hand and the robot detects a face near the click, the robot will deliver the object to the person at the selected location. In essence, virtual buttons get mapped onto the world, each with an associated behavior (see image above). The user can click these virtual buttons by pointing at them and illuminating them with the laser pointer.

In our object fetching application, there are initially virtual buttons surrounding objects within the environment. If the user illuminates an object (“clicks it”), the robot moves to the object, grasps it, and lifts it up. Once the robot has an object in its hand, a separate set of virtual buttons gets mapped onto the world. At this point, clicking near a person tells the robot to deliver the object to that person, clicking on a tabletop tells the robot to place the object on the table, and clicking on the floor tells the robot to move to the selected location.
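This context-dependent mapping from a clicked 3D location to a behavior can be summarized as a small dispatch function, sketched below. The behavior names and context predicates are illustrative stand-ins, not the robot’s actual implementation.

```python
# Illustration only: the context-dependent mapping from a clicked 3D location
# to a behavior. Behavior names and context predicates are stand-ins, not the
# robot's actual implementation.

def select_behavior(click_xyz, holding_object, face_nearby, on_tabletop, on_floor):
    """Return the behavior triggered by a laser click, given the current context."""
    if not holding_object:
        return "fetch_object"        # move to the clicked object, grasp it, lift it
    if face_nearby(click_xyz):
        return "deliver_to_person"   # hand the held object to the person
    if on_tabletop(click_xyz):
        return "place_on_table"      # set the held object down at the click
    if on_floor(click_xyz):
        return "drive_to_location"   # navigate to the clicked floor location
    return "ignore_click"

# Example usage with trivial stand-in predicates:
print(select_behavior((1.0, 0.2, 0.7), holding_object=True,
                      face_nearby=lambda p: False,
                      on_tabletop=lambda p: p[2] > 0.5,
                      on_floor=lambda p: p[2] <= 0.1))   # -> "place_on_table"
```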

This project is funded by the Wallace H. Coulter Foundation as part of a Translational Research Partnership in Biomedical Engineering Award, “An Assistive Robot to Fetch Everyday Objects for People with Severe Motor Impairments”.


EL-E Retrieving an Object

EL-E executing actions triggered by a user in a sample object-fetching application. In this video, Cressel Anderson, an able-bodied robotics researcher sitting in a wheelchair to simulate a user with motor impairments, first commanded EL-E to grasp a cordless phone and bring it back. After this, the robot was instructed to retrieve a pill bottle from the table, drive to a specified location, and set the object on a table next to the user.  —  Oct 14, 2008.

EL-E Retrieving from a Coffee Table

EL-E grasping from a coffee table. Video made by Advait Jain.  —  Nov 7, 2008.

EL-E Retrieving an Object

Initial prototype demonstration of EL-E retrieving objects designated by the user using a clickable world interface.


A Clickable World: Behavior Selection Through Pointing and Context for Mobile Manipulation, Hai Nguyen, Advait Jain, Cressel Anderson, and Charles C. Kemp, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2008.

A Point-and-Click Interface for the Real World: Laser Designation of Objects for Mobile Manipulation, Charles C. Kemp, Cressel Anderson, Hai Nguyen, Alex Trevor, and Zhe Xu, 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2008.