This past weekend, Tech3 demonstrated at the All-Earth Ecobot Challenge at Humble Civic Center in Houston. Our CTO, Bob Hodge, organized our involvement and did an amazing job coordinating and hosting our presence there! You can read our pre-event blog here.
We were honored to be a part of such a fantastic event that encourages innovation and engineering in youth! We want to thank the Education Foundation of Harris County again for reaching out to us to be a part of this spectacular event. We had a great time engaging with the student participants and hearing what they had to say! They were so enthusiastic and had many questions on everything from how specific pieces of equipment work to their ideas on how to put a nuclear power source on a drone! Many of the teachers and school administrators were likewise engaged and would like to have Tech3 come to their schools to provide the same experience and participate in their career days. Tech3 will continue to be involved with the All-Earth Ecobot Challenge in the years to come!
Our exhibit, named the Eco-Genius Lab, had four stations at the event, showcasing how you can take images of the real world, convert them into a three-dimensional model, analyze that model, and then present it in a virtual environment or as a scaled, physical model. The four steps for representing the real world as a 3D model are:
1. CAPTURE – This is the first step in modeling the real world digitally in three dimensions: capturing digital images of it. This lab included cameras, drones, and a digital recreation that can be virtually toured. The first camera we showed was our Osmo, which is mounted on a gimbal that lets the camera pivot on three axes (pitch, yaw, and roll), keeping it stable while it is moved around by the photographer or drone. The second camera we demonstrated, called a Theta S, is special because it can take a spherical picture: it has a “fisheye” lens on each side, allowing it to capture a full panorama. The drones allow us to safely position the camera high in the air so we can get lots of digital images of our subject from many positions around the object.
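The idea behind the gimbal can be sketched in a few lines of code. This is a hypothetical toy model, not the camera's actual control firmware, and it treats the three axis angles as simply additive, which glosses over how real rotations compose:

```python
def gimbal_correction(body_pitch, body_yaw, body_roll):
    """Return the motor angles (degrees) that cancel the body's rotation."""
    # The gimbal motors rotate opposite to the drone or photographer's motion.
    return (-body_pitch, -body_yaw, -body_roll)

def camera_orientation(body, correction):
    """Net camera orientation = body rotation plus the gimbal's counter-rotation."""
    return tuple(b + c for b, c in zip(body, correction))

# The drone tilts 15 degrees nose-down and rolls 5 degrees in a gust...
body = (-15.0, 0.0, 5.0)               # (pitch, yaw, roll)
correction = gimbal_correction(*body)
print(camera_orientation(body, correction))  # -> (0.0, 0.0, 0.0): camera stays level
```

Each axis of the gimbal does exactly this, many times per second, which is why footage from a moving drone still looks smooth.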
2. RECONSTRUCTION – The second step is about converting pictures into 3D computer models. Here we take the pictures obtained in Step 1 and use sophisticated software to combine those 2D images into a 3D model. Using the equipment shown at the Capture station and software built on advanced mathematical formulas, we can create accurate 3D models of the environment around us. Luckily for us, software is available that makes this process rather simple. Roughly speaking, each point on the subject should appear in at least three images taken from different angles. The more pictures that include the object, the more detailed the finished render of that object will be.
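A tiny taste of the math hiding inside those reconstruction tools (which do far more, matching thousands of points across many photos): with two cameras a known distance apart, how far a point shifts between the two images tells you its distance. The numbers below are hypothetical:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo relation: depth = focal_length * baseline / disparity."""
    return focal_px * baseline_m / disparity_px

# Hypothetical setup: a 1000-pixel focal length, two camera positions 0.5 m
# apart, and a feature that shifts 25 pixels between the two photos.
print(depth_from_disparity(1000, 0.5, 25))  # -> 20.0 (the point is 20 m away)
```

Repeat that measurement for every matched point in every pair of photos and a cloud of 3D points emerges, which the software then stitches into a surface.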
3. ANALYZE – The third step is about manipulating the model to get answers. Here, we take the 3D model generated in Step 2 and begin asking questions: examining the model, taking measurements, and otherwise visually exploring it. This lab included a 4D demonstration, delving into human anatomy and virtually dissecting a heart, using the Sprout from Hewlett-Packard. This is a new class of computer that bridges the 2D and 3D worlds by combining a touch screen and pad, cameras, and a projector to create an immersive experience. With this type of computer, you can now visually examine your model.
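"Taking measurements" on a 3D model often comes down to simple geometry on its points. A minimal sketch, using a hypothetical handful of points standing in for a scanned object, of two common queries: the straight-line distance between two points, and the model's bounding box:

```python
import math

def distance(p, q):
    """Straight-line (Euclidean) distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def bounding_box(points):
    """Smallest axis-aligned box containing every point of the model."""
    mins = tuple(min(p[i] for p in points) for i in range(3))
    maxs = tuple(max(p[i] for p in points) for i in range(3))
    return mins, maxs

# Hypothetical points sampled from a scanned object (units: meters).
cloud = [(0, 0, 0), (1, 0, 0), (1, 2, 0), (0, 2, 3)]
print(distance((0, 0, 0), (1, 2, 0)))  # -> sqrt(5), about 2.236 m
print(bounding_box(cloud))             # -> ((0, 0, 0), (1, 2, 3))
```

Real analysis software wraps queries like these in a visual interface, so you click two points on screen instead of typing coordinates.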
4. PRESENTATION – The fourth and last step is about sharing the model with others and “recreating” the real world. At this point we take the 3D model generated in Step 2 and create presentations to share. These presentations can take various forms, including 3D printing, Virtual Reality, Augmented Reality, and more. Our lab demonstrated a 3D printer and Virtual Reality goggles. 3D printing is a process for making a physical object from a three-dimensional digital model, typically by laying down many successive thin layers of a material. Virtual reality is a computer-generated simulation of a three-dimensional image or environment that a person can interact with in a seemingly real or physical way using special electronic equipment, such as a helmet with a screen inside or gloves fitted with sensors. It can be used in almost any industry to offer unique data visualizations and experiences.
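The "many successive thin layers" idea is exactly what a slicer program computes before a 3D print starts. A toy sketch, not a real slicer, of slicing a simple cone-shaped model into 0.2 mm layers:

```python
def slice_cone(height_mm, base_radius_mm, layer_mm=0.2):
    """Return the circle radius printed at each layer, bottom to top."""
    n_layers = round(height_mm / layer_mm)
    # A cone narrows linearly from base_radius at the print bed to 0 at the tip,
    # so each layer is a slightly smaller circle than the one below it.
    return [base_radius_mm * (1 - i / n_layers) for i in range(n_layers)]

radii = slice_cone(height_mm=10.0, base_radius_mm=5.0)
print(len(radii))   # -> 50 layers of 0.2 mm each
print(radii[0])     # -> 5.0, the widest circle, printed first
```

A real slicer does this for arbitrary shapes and also plans the printer head's path within each layer, but the layer-by-layer principle is the same.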
You can view our Facebook album for the event here.
Connect with us!