For our project, we decided to create a bot to help the MakerLab. Its main feature is greeting people when it senses someone walking up to it, recording the person's idea, story, or any feedback they have, and saving the video and audio to an SD card or USB drive so Vishal can receive feedback on things going on in the MakerLab. When people enter the MakerLab for the first time, they may be overwhelmed by everything going on, from the 3D printers buzzing away at work to people making their own designs on the computers. And when they leave, there is little to no interaction with them after they have created their ideas or made their products. The goal is to capture that as feedback for future improvement of the MakerLab, and also for publication of these stories. The droid will make collecting data in the MakerLab more efficient by using its cute appeal and its enticing ability to speak, record audio, and record video.
To do this, we came up with multiple ideas and learned from different perspectives by testing ideas like BlabDroid and technologies like the IBM TJBot and Watson services. All of this culminated in our final product, the MakerLab bot, which runs on a Raspberry Pi 2 board executing Python 3 code that Taofik wrote. This code drives the electronics setup, which is built around a camera module that performs two functions: detecting motion using blob-detection image matching, and recording video of the user's responses. The hardware also includes a USB microphone to record the audio that goes with the video, and a speaker to draw the user's attention and then play the prompt. Based on user feedback, we added ways to let users know what the bot is doing: it plays a startup sound, and a green LED and a 16x2 LCD display show which mode the bot is in, such as "Motion Detection Mode" or "Recording Mode". We used a few more materials to connect all the components, including jumper cables, pins, a T-wedge pin I/O breakout with pin extensions for the Raspberry Pi, and a breadboard. The bot records the user's response and then returns to motion-detection mode, in a continuous loop. We had intended to include a button or two to let the user stop the bot mid-recording, but the connection came apart. Attached to the Raspberry Pi are a wireless USB keyboard for sending the bot commands, a WiFi dongle so a computer can connect to it headlessly/wirelessly, and a USB drive that stores the video and audio and can easily be removed so Vishal can go through all the user responses at the end of a day of recording user interactions.
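The detect-then-record loop above can be sketched in Python. This is only an illustration, not the bot's actual code: the real bot uses blob-detection image matching on the camera feed, while the `detect_motion` function below stands in for that with simple frame differencing, and the helper names in the commented loop (`grab_frame`, `play_prompt`, `show_status`, `record_video_and_audio`) and the thresholds are all assumptions.

```python
import numpy as np


def detect_motion(prev_frame, curr_frame, pixel_thresh=25, blob_thresh=500):
    """Report motion between two grayscale frames (NumPy uint8 arrays).

    A crude stand-in for blob-detection image matching: count the pixels
    whose brightness changed by more than pixel_thresh, and report motion
    when that count exceeds blob_thresh (i.e. a large-enough "blob" moved).
    """
    diff = np.abs(prev_frame.astype(np.int16) - curr_frame.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_thresh)
    return changed > blob_thresh


# Hypothetical main loop, mirroring the continuous cycle described above
# (camera, speaker, and LCD helpers are not shown):
#
# while True:
#     show_status("Motion Detection Mode")   # green LED + LCD message
#     prev, curr = grab_frame(), grab_frame()
#     if detect_motion(prev, curr):
#         play_prompt()                      # speaker draws the user in
#         show_status("Recording Mode")
#         record_video_and_audio()           # saved to the USB drive
```

For example, a blank frame compared with itself reports no motion, while a frame where a bright region (someone walking up) appears does.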
In the first step, we started out making sketches. Our first sketch was rudimentary and shows only a box. As basic as it is, we already had a rough idea of what to use as the outer material and the gist of what was needed inside to make the bot function. We also had a rough idea of where each item would go, such as the Arduino on top, and that we wanted a laser-cut wooden box for the enclosure. In the second sketch we made the bot more friendly-looking, adding cute appeal to encourage interaction, and we rearranged the components to better fit the shape of the bot.
After brainstorming and putting together our ideal design, we started prototyping. In the first segment of our prototyping, we used cardboard cutouts for the head and body of the bot. We also 3D printed the inner frame of the bot; however, because the frame was too thin, it warped as it cooled after printing. We foresaw that it could cause problems by being too fragile and inflexible, and decided against using it for our final product.
Here is a video of us testing the product out:
Here are our slides:
If the link above does not work:
https://docs.google.com/presentation/d/1isyvJkLPwCmRHM2kKYCwS3qcsUCf8dkUi7VeSK8skCI/edit?usp=sharing