Robotics Session
12:00pm-2:00pm, February 25, on GatherTown
The Annual CSL Student Conference invites students and researchers to exhibit their work on robots and related hardware in an event designed along the lines of a poster session. This session provides a platform for students across campus to gain exposure for their research and to disseminate and exchange ideas. The aim is to stimulate interaction among researchers and foster closer bonds within the robotics community of our university.
Participants
Kuan-Yu Tseng
“GRILC: Gradient-based Reprogrammable Iterative Learning Control for Autonomous Systems”
We propose a novel gradient-based reprogrammable iterative learning control (GRILC) framework for autonomous systems. Trajectory-following performance in autonomous systems is often limited by the mismatch between the complex actual model and the simplified nominal model used in controller design. To overcome this issue, we develop the GRILC framework with offline optimization, using information from the nominal model and the measured actual trajectory, followed by online system implementation. In addition, a partial and reprogrammable learning strategy is introduced. The proposed method is applied to an autonomous time-trialing example, and the learned control policies can be stored in a library for future motion planning. The simulation and experimental results illustrate the effectiveness and robustness of the proposed approach.
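To illustrate the flavor of a gradient-based iterative learning update, here is a minimal Python sketch. It assumes a quadratic tracking cost and a linearized nominal input-to-output map; the function and variable names are illustrative assumptions, not the presenter's implementation.

import numpy as np

def grilc_iteration(u, y_measured, y_ref, G_nom, step_size=0.1):
    # One offline gradient step of an iterative learning control update.
    #   u          : flattened control sequence from the previous trial
    #   y_measured : trajectory measured on the actual system
    #   y_ref      : reference trajectory to follow
    #   G_nom      : nominal model's linearized input-to-output map
    # (Illustrative sketch; the presenter's cost and update may differ.)
    e = y_ref - y_measured          # tracking error from the real trial
    grad = -G_nom.T @ e             # gradient of 0.5*||e||^2 w.r.t. u
    return u - step_size * grad     # updated control for the next trial

Repeating such trials until the error converges, and storing the resulting control sequence per maneuver, is one way a library of learned policies could be built up for later motion planning.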
Zhe Huang
“Human-Robot Collaboration in Industrial Assembly Tasks”
Safety and efficiency are the two main goals of human-robot collaboration in industrial assembly tasks. We developed vision- and contact-based algorithms to enforce safety in scenes where humans and robots coexist. Human intention estimation is used to infer the human's desired goals and guide the robot to perform tasks safely and efficiently. All demonstrations are created with a UR5e robot.
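The abstract does not specify the estimator, but a common approach to human intention estimation is Bayesian inference over a set of candidate goals. The sketch below, with assumed names and a Boltzmann-rationality likelihood, is only one plausible instantiation.

import numpy as np

def update_goal_belief(belief, prev_pos, cur_pos, goals, beta=5.0):
    # Bayesian update of a belief over candidate human goals (illustrative;
    # the presenters' estimator may differ). Motion toward a goal is scored
    # by how well the observed hand step aligns with the direction to it.
    step = cur_pos - prev_pos
    step = step / (np.linalg.norm(step) + 1e-9)
    scores = []
    for g in goals:
        to_goal = g - prev_pos
        to_goal = to_goal / (np.linalg.norm(to_goal) + 1e-9)
        scores.append(np.exp(beta * float(step @ to_goal)))
    posterior = belief * np.array(scores)
    return posterior / posterior.sum()

The robot can then plan for the most probable goal while keeping safety constraints active for all remaining hypotheses.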
Shivani Kamtikar
“Visual Servoing for Pose Control of Soft Continuum Arm in a Structured Environment”
For soft continuum arms, visual servoing is a popular control strategy that relies on visual feedback to close the control loop. However, robust visual servoing is challenging: it requires reliable feature extraction from the image, accurate control models, and sensors to perceive the shape of the arm, all of which can be hard to implement in a soft robot. This video demonstrates our method, which circumvents these challenges by using a deep neural network to perform smooth and robust 3D positioning tasks on a soft arm via visual servoing with a camera mounted at the distal end of the arm. A convolutional neural network is trained to predict the actuations required to achieve the desired pose in a structured environment. A proportional control law is implemented to reduce the error between the desired and current images as seen by the camera. The model, together with the proportional feedback control, makes the described approach robust to several variations such as new targets, lighting, loads, and diminution of the soft arm. Furthermore, the model can be transferred to a new environment with minimal effort.
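For background, the classical proportional image-based visual servoing law that this kind of approach builds on can be written in a few lines. Note this sketch uses an analytic interaction matrix, whereas the presenters replace the model with a trained CNN; names and the gain value are assumptions.

import numpy as np

def ibvs_step(feat_current, feat_desired, L, lam=0.5):
    # Classical proportional image-based visual servoing (background sketch,
    # not the presenters' CNN-based controller).
    #   feat_current, feat_desired : image feature vectors from the camera
    #   L   : interaction matrix (image Jacobian) at the current pose
    #   lam : proportional gain
    e = feat_current - feat_desired         # image-space error
    return -lam * np.linalg.pinv(L) @ e     # velocity/actuation command

Driving this update at each frame shrinks the image error exponentially toward zero, which is the same closed-loop behavior the learned proportional controller aims for.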
Jilai Cui
“Simulation of octopus arm movements”
The arms of the octopus have high flexibility, elasticity, and independence, which makes them a potential model for developing soft-body robotic arms. However, how octopuses control the segments within each arm and coordinate across arms remains unknown. I developed a model for simulating arm movements that can produce goal-directed reaching behaviors based on odor information from the target. Each arm consists of several segments, each with a set of chemosensors and neuronal interactions with the other segments. The chemosensory information of each segment is integrated by a lateral inhibition network, which coordinates the whole arm. The arms can also perform rhythmic movements, based on a simulated central pattern generator. In future development, we can fit this model to electrophysiological data recorded from real octopuses to optimize arm movements and achieve more complex behaviors. This model can help explain how the octopus coordinates its eight arms and accomplishes different tasks with a simple arm structure, which can inspire the design of bionic robotic arms. The model can also be used to simulate other elongated body plans, such as those of brittle stars and lampreys.
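A central pattern generator for rhythmic segment movement is often modeled as a chain of coupled phase oscillators. The following minimal sketch (parameter values and names are assumptions, not the presenter's model) shows how a fixed phase lag between neighboring segments produces a traveling wave of activation along the arm.

import numpy as np

def cpg_step(phases, dt=0.01, omega=2.0 * np.pi, coupling=1.0, lag=np.pi / 4):
    # One Euler step of a chain of coupled phase oscillators, a minimal
    # central pattern generator model (illustrative sketch only).
    dphi = np.full_like(phases, omega)              # intrinsic frequency
    dphi[1:] += coupling * np.sin(phases[:-1] - phases[1:] - lag)
    phases = phases + dt * dphi
    activation = np.clip(np.sin(phases), 0.0, None) # rectified drive per segment
    return phases, activation

Each oscillator settles a fixed lag behind its neighbor, so the rectified activations sweep from the base toward the tip, the kind of rhythmic pattern the simulated arm exhibits.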
Sahil Bhandary Karnoor, Avinash Subramaniam, Eric Dong
“Indoor Navigation with Acoustic Augmented Reality Glasses”
We aim to design an audio-based indoor navigation system that plays sounds that appear to arrive from a specific direction; by following the sound, you arrive at your destination. Realizing this end-to-end system requires solving two critical technical challenges: (1) indoor localization and (2) real-time spatial sound synthesis. We intend to solve indoor localization using IMU sensors embedded in earable devices, combined with ambient WiFi signals, and to improve head-related transfer function (HRTF) estimates for spatial sound through personalization.
IMU-based localization uses pedestrian dead reckoning (PDR), which integrates accelerometer and gyroscope measurements and performs reasonably well over short distances. WiFi fingerprinting provides imprecise but absolute position measurements, complementing IMU-based PDR by correcting the error that accumulates over time. Spatial sound, in turn, is simulated by filtering pre-recorded phrases so that they appear directional and externalized, using personalized HRTFs. A sketch of the localization pipeline follows this paragraph.
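The PDR-plus-WiFi idea can be illustrated with a minimal sketch. The complementary blend below is an assumption (a Kalman filter is another common choice), and all names and gains are illustrative rather than the team's implementation.

import numpy as np

def pdr_step(pos, heading, step_length, yaw_rate, dt):
    # Pedestrian dead reckoning: rotate the heading using the gyroscope,
    # then advance one detected step along it (illustrative sketch).
    heading = heading + yaw_rate * dt
    pos = pos + step_length * np.array([np.cos(heading), np.sin(heading)])
    return pos, heading

def fuse_wifi_fix(pos_pdr, pos_wifi, alpha=0.1):
    # Nudge the drifting PDR estimate toward the imprecise but absolute
    # WiFi fingerprint fix (complementary fusion; an assumed design choice).
    return (1.0 - alpha) * pos_pdr + alpha * pos_wifi

Because the WiFi fix is absolute, even an occasional, noisy fingerprint measurement is enough to bound the drift that pure dead reckoning would otherwise accumulate.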
Our system is implemented on a smartphone and custom-designed smartglass platform. The smartphone performs indoor localization using IMU data from the smartglass and WiFi fingerprinting. Meanwhile, the smartglass provides integrated earphones to play directional audio cues. We expect to demo the system in CSL as part of our presentation.
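For the spatial sound synthesis described above, a directional cue is typically rendered by convolving a mono recording with the head-related impulse responses (the time-domain HRTFs) for the target direction. A minimal sketch, with assumed variable names:

import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    # Render a directional audio cue by filtering a mono phrase with the
    # left- and right-ear impulse responses measured for the target
    # direction (illustrative sketch; not the team's exact pipeline).
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)   # stereo buffer for the earphones

Personalizing the HRTFs, as the team proposes, improves how convincingly the filtered sound appears to come from outside the head and from the intended direction.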
CONTACT US
For more information, please contact the session chair, Yashaswini Murthy.