Re-use of Fuel Cell By-products

Project number
18035
Organization
Microsoft
Academic year
2018-2019
Microsoft is testing the implementation of fuel cell technology in order to lessen its dependence on traditional power supplies. Through the conversion of natural gas to power, solid-oxide fuel cells give off high-quality heat, highly pure water and gaseous carbon dioxide. The design created to use the fuel cell byproducts is a closed-loop hydroponic greenhouse system in conjunction with a heat-driven water-treatment system. The system sequesters excess carbon dioxide and provides a water source derived from the data centers. Full-scale implementation of this project will involve building greenhouses directly on top of the fuel cell-powered data centers. This will allow water to be purified at the site of the data centers and fed directly into the greenhouse system, creating a food source for the surrounding community. The waste biomass from the greenhouse will be composted to create methane that will be fed back into the fuel cells.

Imaging Polarimeter Software Model

Project number
18034
Organization
UA College of Optical Sciences
Academic year
2018-2019
The Polarization Laboratory in the UA College of Optical Sciences recently received an RGB950 AxoStep polarimeter. The instrument consists of extremely bright light-emitting diodes, a polarizer, two rotating retarders, a second polarizer, and a camera. It operates at four wavelengths and has a particularly large field of view, so there is some angular variation in the performance, which this project seeks to understand. The RGB950 polarimeter was used to obtain Mueller matrices based on air measurements at various angles in order to characterize the instrument’s angular dependence. Additionally, software models of optical elements tested by the polarimeter were developed, and the modeled Mueller matrices were compared to measured Mueller matrix data. This includes models for a nonpolarizing cube beam splitter and a solid corner cube. Each model was built using Polaris-M, a library of polarization ray tracing functions developed by Airy Optics for use in Mathematica. The resulting software characterizing the polarimeter gives the lab a tool for predicting polarimetric measurements and verifying instrument performance.
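As a sketch of the air-measurement characterization described above: an empty (air) sample path should ideally measure as the 4×4 identity Mueller matrix, so the residual from identity at each field angle quantifies the angular error. This is illustrative Python, not the lab's Polaris-M code; the normalization and error metric are assumptions.

```python
import math

def air_measurement_error(mueller_measured):
    """RMS deviation of a measured air Mueller matrix from the ideal identity.

    Air should ideally measure as the 4x4 identity matrix after normalizing
    by the m00 element, so the residual at a given field angle quantifies
    the instrument's angular error there.  Illustrative sketch only.
    """
    m00 = mueller_measured[0][0]
    sq_sum = 0.0
    for i in range(4):
        for j in range(4):
            ideal = 1.0 if i == j else 0.0
            sq_sum += (mueller_measured[i][j] / m00 - ideal) ** 2
    return math.sqrt(sq_sum / 16)

# a perfect instrument: air measures exactly as the identity matrix
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

# a small off-diagonal leakage term, as angular error might introduce
perturbed = [row[:] for row in identity]
perturbed[1][2] = 0.02
```

Repeating this metric across the field of view yields an angular error map of the instrument.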

Integrated Image Processing Unit for Automotive Cameras

Project number
18033
Organization
TuSimple
Academic year
2018-2019
The sponsor owns and operates autonomous trucks whose behavior is determined by a combination of software and hardware, and which use a single graphics processing unit, or GPU, for all eight on-board cameras. This design requires 1600 watts to process an image set and represents a single point of failure. The new design uses multiple redundant GPUs, each designed to handle up to three cameras. The GPUs provide the server on the vehicle with data about distances to objects and types of objects, as well as images that have been enhanced with respect to their brightness, gamma, contrast, boundaries, and denoising variables. Offloading this computation from a central unit allows for faster image processing before the image sets and data are sent to the on-board server, which can then focus on vehicle control rather than image processing. If a failure occurs in one of these GPUs, the vehicle can continue moving safely. The system requires only 60 watts to complete the analysis of an image set and total power usage does not exceed 300 watts.
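The camera-to-GPU partitioning described above can be sketched as a simple assignment. This is a hypothetical illustration; the sponsor's actual scheduling logic is not described in the abstract, and the function name and layout are assumptions.

```python
def assign_cameras(num_cameras=8, cams_per_gpu=3):
    """Partition camera IDs across GPUs, each handling at most cams_per_gpu.

    Hypothetical sketch of the redundant multi-GPU layout; the real
    assignment and failover logic belong to the sponsor's system.
    """
    gpus = []
    for cam in range(num_cameras):
        if not gpus or len(gpus[-1]) == cams_per_gpu:
            gpus.append([])  # bring another GPU online for the next group
        gpus[-1].append(cam)
    return gpus

# 8 cameras spread over GPUs of up to 3 cameras each
layout = assign_cameras()
```

At 60 watts per image-set analysis, even this minimal three-GPU layout stays well under the 300-watt budget, leaving headroom for redundancy.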

Cubesat Camera Space Ruggedization

Project number
18032
Organization
UA Lunar and Planetary Laboratory
Academic year
2018-2019
Pre-made space-qualified cameras are typically very expensive, and not viable for nanosatellite missions. Limited deep-space communication is also a problem for such missions. CatSat will be used in a proof of concept for a high-bandwidth deep-space communication system able to achieve high-definition video transmission from low-Earth orbit through a novel inflatable antenna. CatSat will also pave the way for the use of affordable commercial off-the-shelf cameras on future missions. The CatCam camera system will image the deployment of the inflatable antenna and capture video of Earth. The data that CatCam captures will be sent through the inflatable antenna as a demonstration of high-bandwidth communication. CatCam has been designed, built and tested to withstand the harsh environment of space, and has been programmed with camera control software for easy integration into the rest of the CubeSat.

Vehicle Bi-ocular Digital Display Viewer

Project number
18031
Organization
MTEQ Night Vision and Electronic Sensors Directorate
Academic year
2018-2019
The window of an armored vehicle is the weakest point on the vehicle’s exterior. Replacing the window with a camera and display would improve occupant safety while allowing other vital information to be overlaid on the video feed. The vehicle biocular digital display viewing system receives a single video source input and displays the information to both eyes of the operator. The visual information sent to the operator is meant to mimic what they would see when looking out the front window of a vehicle. The design incorporates a custom optical design that uses achromatic doublet lenses, flat mirrors, and high-definition displays. The system is mounted in a custom housing consisting of a microcontroller, an adjustable interpupillary distance compensator, custom flexures, and custom lens mounts. The design uses custom software to correct for the inherent distortion of the optical system in real time. The viewing system effectively improves vehicle safety while providing the operator visual data on the environment around the vehicle with relevant vehicle metrics overlaid on the video.

Aviators Night Vision Augmented Display

Project number
18030
Organization
MTEQ Night Vision and Electronic Sensors Directorate
Academic year
2018-2019
Safety in military aviation depends heavily on the pilot’s ability to see and avoid traffic, obstacles and terrain. Night vision goggles are used by pilots to perform operations in low-light conditions and are essential to the safety and performance of the aircraft. However, pilots are unable to view key aircraft performance parameters often shown on instrument displays, which are too bright for the goggles. The design subtly introduces critical flight information within the pilot’s field of view with an augmented display system, an avionics software package, a non-intrusive mounting unit, and a power system. A lightweight optical housing was developed and is mounted to the pilot’s helmet, allowing easy connection to vehicle power systems without introducing discomfort for the user. The software package was developed for easy implementation on a readily available lightweight computer, and human factors considerations were at the heart of each graphical design choice. System requirements were validated through testing and modeling to military specifications for vibration, heat, acceleration, data transfer and electromagnetic interference.

Robotic Weeding Machine for Leaf Lettuce Crops

Project number
18029
Organization
UA Department of Biosystems Engineering
Academic year
2018-2019
In large-scale lettuce farming, farmers do not have an economical and sustainable method to control weeds and prevent them from emerging and competing with their crops, so they rely on costly and tedious manual weed removal. The robotic weeding machine prototype consists of a pneumatic piston, stepper motors, an Arduino for actuator control, and a laptop for the graphical user interface and software. The system uses parts machined with computer numerical control for precise and accurate positioning of the weed removal apparatus, which software positions according to user input. The user is provided with an easy-to-use graphical user interface on a touchscreen laptop for weed selection. The prototype and software have been built to be compatible with sponsor-owned weed-lettuce discrimination software to maintain continuity with the eventual mission: an autonomous weed identification and removal robot for lettuce fields.

Golf Course Pin Location Using Unmanned Aircraft

Project number
18028
Organization
FL Broyles, LLC
Academic year
2018-2019
Golf is a game of inches, so it is important to know exactly what conditions each hole brings. Every morning, golf course operations teams get to work early to mark the location of the pin and tee boxes for that day. The current process involves the team walking the course and approximating the location of the pin based on a pin sheet. This project uses an unmanned aircraft to autonomously mark the location on a green for a new hole to be cut. The system includes an unmanned aircraft equipped with a paint-dispensing attachment, a golf operations app to control the unmanned aircraft, and a players’ app to view daily course information. The paint-dispensing subsystem includes a light sensor, a pump that pushes the paint through a nozzle, a microcontroller, and an external battery as the power source. All of these components are integrated in the 3D-printed attachment mounted on the legs of the unmanned aircraft. The system is controlled through the golf operations app, which inputs the desired GPS locations and monitors the status of the system. The players’ app contains hole information, including pin placement on the green and distances from the tee boxes. The apps were written in Java and, after extensive testing, the system accurately and efficiently marks the pin and tee box locations on a golf course.
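The tee-to-pin distances shown in the players' app reduce to the great-circle distance between two GPS fixes. A minimal sketch of that computation is the standard haversine formula; the coordinates, function name, and use of Python rather than the apps' Java are illustrative.

```python
import math

def haversine_yards(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in yards.

    Standard haversine formula on a spherical Earth; an illustrative
    sketch, not the project's actual Java implementation.
    """
    r_yards = 6371000 / 0.9144  # Earth mean radius (m) converted to yards
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_yards * math.asin(math.sqrt(a))

# two hypothetical fixes on the same course, roughly 146 yards apart
d = haversine_yards(32.2319, -110.9501, 32.2331, -110.9501)
```

Over distances this short, the spherical-Earth approximation is accurate to well under a yard.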

Hyperspectral Imaging Smartphone

Project number
18027
Organization
Hellman Optics, LLC
Academic year
2018-2019
Hyperspectral imaging is used in fake currency detection, quantification of carbon dioxide concentrations in air, food safety, cancer screening, and many other applications. Hyperspectral imaging techniques require complex optical systems, making devices expensive and nonportable and restricting the impact hyperspectral imaging can have on society. The designed system trades optical complexity for postprocessing requirements, enabling a compact assembly that, at quantity, can be produced for less than $10. The phone attachment minimizes the number of optical elements necessary for hyperspectral imaging while maintaining very wide tolerances, fully compensating for poor user alignment. The phone app provides a user interface and controls calibration, data capture, and processing. Once calibrated, the system captures raw data from a real-world scene, processes the raw data into hyperspectral image data, and uses machine learning on that data to make an informed decision about the scene.

Formula Racing Car Torsion Testing Rig

Project number
18026
Organization
UA Society of Automotive Engineers Student Chapter
Academic year
2018-2019
Currently, the only way to estimate the torsional rigidity of a chassis is with SolidWorks finite element analysis. A torsion-testing rig will allow verification of the analysis and lead directly to a higher score in the design presentation at the yearly competition. The rig secures the two rear corners of the chassis while allowing the front of the chassis to pivot on a stand. A torsion load is applied to the front two corners of the chassis and the resulting displacement is measured at various points along its length. With the torsion load and resulting displacement known, the torsional rigidity of the chassis can be calculated. The rig has adjustable hub interfaces, a rigid rear constraint system built on a welding table, and weights to apply a moment to any Formula SAE rules-compliant chassis. Dial indicators, accurate to 0.001 inches, were used to gather the chassis displacement data.
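The rigidity calculation the rig enables can be sketched as follows: applied torque divided by the resulting twist angle, where the angle comes from a dial-indicator deflection and its lateral offset from the chassis centerline. This is illustrative Python; the parameter names, units, and rig geometry are assumptions, not the team's actual procedure.

```python
import math

def torsional_rigidity(load_lbf, moment_arm_in, deflection_in, station_offset_in):
    """Estimate chassis torsional rigidity in lbf-ft per degree of twist.

    Hypothetical sketch: torque = load x moment arm; twist angle from the
    vertical dial-indicator deflection and the indicator's lateral distance
    from the chassis centerline.
    """
    torque_lbft = load_lbf * moment_arm_in / 12.0  # applied torque, lbf-ft
    twist_deg = math.degrees(math.atan(deflection_in / station_offset_in))
    return torque_lbft / twist_deg

# e.g. a 50 lbf weight on a 24 in arm, with 0.040 in of deflection
# read by a dial indicator 12 in off the chassis centerline
k = torsional_rigidity(50.0, 24.0, 0.040, 12.0)
```

Repeating the measurement at stations along the chassis shows where the structure twists most, which is what the finite element comparison needs.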

Get started and sponsor a project now!

UA engineering students are ready to take your project from concept to reality.