This week, we learned how to scan with the Kinect hardware attached to the iPad. The Kinect is a motion-sensing input device that Microsoft developed for the Xbox, and I was surprised by how easily it could be used for 3D scanning. The hardware and app we used to scan each other worked really well! Below is a picture of the Kinect and the stand used for the Xbox.
When the Kinect was attached to the iPad, it looked like this:
We had to move slowly around the subject we were scanning. It was important to have good lighting and no background objects that could distract the sensor.
Above is a picture of someone being 3D scanned. Since we were having trouble uploading new images, I’ve just attached photos from Google. So the above isn’t someone in our class!
We learned how to use Meshmixer to refine the scan and get it ready for printing. I wasn’t able to print mine this week, but I hope to do it sometime next week.
This week, my group also made our Project Testing Plan and began assembling our people counter. For project testing, we will work with other students, a contact at BrewLab, and CU At Home.
Our first prototype will not use the laser sensors. Instead, it will use a push button and count up every time the button is pushed. Once we figure out how to connect all the wires and upload the code, we will move on to the more complex laser sensors. A rough sketch of the counting logic is below.
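Here is a minimal Arduino-style sketch of what that button counter could look like. The pin number, pull-up wiring, and debounce delay are just assumptions for illustration, not our actual wiring or final code.

```cpp
// Push-button prototype: count up by one each time the button is pressed.
// Pin choice and timing are assumptions, not the final people-counter design.

const int BUTTON_PIN = 2;   // assumed: button between pin 2 and ground
int count = 0;              // number of presses so far
int lastState = HIGH;       // HIGH = not pressed (internal pull-up keeps the pin high)

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);  // use the built-in pull-up resistor
  Serial.begin(9600);                 // print the running count over USB serial
}

void loop() {
  int state = digitalRead(BUTTON_PIN);
  // Count only on the HIGH -> LOW transition so one press is counted once
  if (lastState == HIGH && state == LOW) {
    count++;
    Serial.println(count);
    delay(50);  // crude debounce so contact bounce isn't counted as extra presses
  }
  lastState = state;
}
```

Counting only on the transition (rather than whenever the button reads as pressed) keeps a long press from being counted more than once.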
One other thing I learned this week is that an OBJ file can carry color with it. The hope is that this will become the standard format in the future, but some software can’t handle the color and will convert the model to monochrome.
Good thoughts. I wasn’t aware that we were using the Xbox Kinect to scan our faces. It’s great to see products that we are able to modify to accomplish other goals. I also found that making multiple circles around the scanned object improves the final result. Doing this helps reduce the time spent touching up the smaller details.