2+ Community: Pipeline To Better Placemaking
This project is the next step in Tommy James’ objective of “Using Realities to Observe, Teach & Engage in the Creation of Place”. The first step in Dr. James’ “Pipeline to Better Placemaking” is measuring place, which is also the focus of this project. Measuring place is just one aspect of the pipeline; Dr. James is currently working on various projects that include AR/VR, digital engagement, design, and eventually the building of places. The data gathered from measuring place will provide our sponsor with the information needed to carry out these future projects effectively and efficiently. By improving both testing and information analysis, users will be better able to understand places and what changes, if any, need to be made. Understanding what people feel, and how their feelings change when they visit certain places, will allow our sponsor and his colleagues to ensure that communities are created and improved to be safer and more enjoyable.
Accelerating Topology Optimization with Machine Learning
Topology optimization is a process that generates a physical object that uses less material while maintaining its structural stability; in that sense, the result is an optimal part. Our project reduces the time required to perform this optimization with the aid of machine learning. This work benefits the research community as a whole, as well as our sponsor, DEVCOM. It works by replacing parts of the optimization process with a machine learning model, or other fast shortcuts, reducing the overall time required to generate a part. It differs from prior work because our problem minimizes mass subject to compliance and stress constraints, while many pre-existing problems in this field minimize compliance subject to mass and stress constraints. Our problem also includes the added complexity of cylinders within the problem space, which act as anchors for our force loads. The applications of this project are numerous: it serves both as a foundational solution to this specific class of topology optimization problem and as a means to speed up part generation overall.
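As a hedged toy illustration of the problem structure (not the project's actual solver, and with made-up numbers): for a single axial bar, "minimize mass subject to a stress constraint" has a closed form, since stress is F/A and mass is density × area × length, so the lightest feasible bar uses the smallest area that keeps stress at its limit.

```python
# Toy sketch, not the project's topology optimizer: for one axial bar,
# "minimize mass subject to stress" reduces to picking the smallest
# cross-sectional area A with F / A <= sigma_max.  All numbers below
# are hypothetical illustration values.

def lightest_bar(force, sigma_max, length, density):
    """Return (area, mass) of the lightest bar whose stress stays feasible."""
    area = force / sigma_max          # stress sigma = F / A sits exactly at the limit
    mass = density * area * length    # mass = rho * A * L
    return area, mass

area, mass = lightest_bar(force=1000.0, sigma_max=250e6, length=2.0, density=7850.0)
```

In the project's real formulation the design variable is a whole material-density field rather than one area, and compliance joins stress as a constraint, but the "least material that stays feasible" structure is the same.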
The Lockheed Martin DroneSim Advanced Autonomy project is centered on deep reinforcement learning and object tracking. Deep reinforcement learning is being used in a myriad of cutting-edge applications and is widely seen as the next big step for vehicle autonomy. Without deep reinforcement learning, the amount of training data and time necessary to achieve the desired outcome can often be a barrier. Our project addresses this broad issue by demonstrating how deep reinforcement learning can be used to train a drone to operate autonomously in a fast and efficient manner. The Advanced Autonomy senior design project uses Microsoft AirSim and Unreal Engine 5 to deliver a modern, feature-rich simulation environment for drone training. Although our project specifically deals with quadrotor drone path traversal, object detection, and target following, nearly any vehicle autonomy use case can be trained and implemented with the underlying technologies used in our project. A key benefit of deep reinforcement learning in an accurate simulation environment is the ability to use the trained models on real vehicles.
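To make the learning loop concrete, here is a hedged, self-contained toy, not our AirSim/UE5 setup: tabular Q-learning teaches an agent on a one-dimensional corridor to reach a goal cell. Deep reinforcement learning replaces the lookup table with a neural network, but the reward-driven Bellman update is the same idea.

```python
import random

# Hypothetical toy, not the project's drone environment: an agent on a
# 1-D corridor of cells learns to reach the goal cell with tabular
# Q-learning.  Actions are -1 (left) and +1 (right).

def train(n_cells=6, goal=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(n_cells) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy: explore occasionally, otherwise take the best-known action
            a = rng.choice((-1, 1)) if rng.random() < eps else max((-1, 1), key=lambda a: q[(s, a)])
            s2 = min(max(s + a, 0), n_cells - 1)
            r = 1.0 if s2 == goal else -0.01                      # reward only at the goal
            best_next = max(q[(s2, -1)], q[(s2, 1)])
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])  # Bellman update
            s = s2
    return q

q = train()
# After training, the greedy policy from every non-goal cell moves right, toward the goal.
policy = [max((-1, 1), key=lambda a: q[(s, a)]) for s in range(5)]
```

The step penalty and discounted goal reward propagate backward through the table, which is exactly the signal a deep network would be trained to approximate in the full setup.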
AlgoQuant: Retail Friendly Algorithmic Trading
With our product, AlgoQuant, we aim to make algorithmic trading more available to non-professional individual investors (retail investors). This is a field currently dominated by investment funds, banks, and institutional traders who possess all the resources and expertise that retail investors lack. AlgoQuant is a web and mobile application where users can easily configure and allocate money (initially simulated money) to automated investors that use statistical analysis and machine learning models to make intelligent trading decisions. Users can also back-test their configured investors to gain insight into how their trades might have performed in the past using historical data. Products such as AlgoQuant have recently been made possible by the expansion of developer-friendly APIs for collecting large amounts of market data and programmatically executing trade orders. The landscape of algorithmic trading has seen large growth over the years with no signs of slowing down. According to an article published in 2021 by Yahoo Finance, “There are many fin-tech platforms springing with the singular goal of democratizing institutional-grade technologies to all traders”. The potential of products such as AlgoQuant is recognized by many, and we believe we are at the forefront of this movement. https://www.yahoo.com/video/algorithmic-trading-accessible-retail-traders-163330153.html
AnimELLE Crossing is a creative open world game that invites students learning a variety of languages to interact with characters and objects to complete objectives. The game is aimed at college students attempting to learn a language; however, it is designed to be scalable and can be geared toward different audiences. AnimELLE Crossing is different from other typical language learning games in that it aims to make the language learning process fun and immersive by providing different mini games to help players along the way.
Augmented Reality Medical Education Guided Tour
The medical education department at UCF wants a more engaging and interactive way to conduct tours of the Medical Education Building. The building is a technologically advanced facility, and introducing potential students and investors to it through an AR experience makes for a much more impressive first impression. The tour runs on the HoloLens 2, a mixed reality headset. The app recognizes tour stops and displays the associated information to the user. The contents of a tour stop include 2D articles and videos, hologram recordings of faculty members, and interactive 3D objects. There is currently no augmented reality tour experience available for the Medical Education Building. This project could be done as an AR smartphone app, but running on the HoloLens 2 makes it more immersive and turns it into a mixed reality experience rather than just augmented reality. The base of the project is object recognition and information display in multiple forms. These basic functions can easily be applied to other tours, information sessions, explorative environments, training sessions, and more.
aVRage Paths: Virtual Reality Researcher Toolkit
Virtual Reality research has skyrocketed over the past decade, especially at UCF. However, the quantity and quality of toolkits available to conduct this research have not seen the same growth. aVRage Paths provides a highly versatile toolkit for Virtual Reality researchers that can be used out of the box or customized using our modular system design. Our implementation of a novel maze generation algorithm utilizes so-called “impossible spaces” to create a randomized yet statistically controllable environment from which researchers can get concrete data. Researchers are then equipped with an interactive dashboard that gives them an overview of user studies as they conduct them through our toolkit. Various systems such as telemetry and rich playback capabilities enable insights that further aid in creating effective research studies.
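Our impossible-spaces algorithm itself is novel and not reproduced here, but the following hedged sketch of a classic depth-first (recursive backtracker) maze generator shows the randomized-yet-reproducible property a researcher needs: the same seed always yields the same maze, so conditions can be held constant across participants.

```python
import random

# Hedged sketch, not the toolkit's "impossible spaces" algorithm: a
# standard depth-first-search maze generator.  Seeding the RNG makes a
# randomized maze reproducible, the property behind statistically
# controllable study conditions.

def generate_maze(width, height, seed=0):
    rng = random.Random(seed)
    visited = {(0, 0)}
    passages = set()            # unordered pairs of cells whose shared wall is carved out
    stack = [(0, 0)]
    while stack:
        x, y = stack[-1]
        neighbors = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= x + dx < width and 0 <= y + dy < height
                     and (x + dx, y + dy) not in visited]
        if neighbors:
            nxt = rng.choice(neighbors)     # carve toward a random unvisited neighbor
            passages.add(frozenset({(x, y), nxt}))
            visited.add(nxt)
            stack.append(nxt)
        else:
            stack.pop()                     # dead end: backtrack
    return passages

maze = generate_maze(4, 4, seed=42)
```

Because the result is a spanning tree over the cells, every maze on a width × height grid has exactly width × height − 1 passages, another invariant a researcher can rely on when comparing trials.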
Bar Butler: Inventory Management Software
Even today, companies use outdated methodologies and paper trails to track products and stock. Not only is this a tedious process that requires capable workers to waste time on menial tasks, it is also prone to human error that can go unchecked. This is an area ripe for development and the adoption of technology. A software application that digitally tracks inventory using barcodes and lot numbers, rather than manual counts, would greatly facilitate easier handoffs between internal departments as well as customers. The goal of Bar Butler is to create an inventory management system (IMS) that allows convenient tracking of products and the individual components they are composed of, streamlining the product development process and enabling efficient and effective work. It will also allow quality control and shipment tracking all in one place. While other inventory management software can be difficult to configure to a company's exact business needs, Bar Butler will integrate easily with our sponsor's existing workflow, making their work more efficient than before.
bePlayFuel: Smartwatch Sensor Logging Feature for Application
Our project is an additional feature for the existing bePlayFuel application that allows users to connect their smartwatch to record and log their activity simultaneously. This product is for athletes, currently water skiers in particular, who would like a way to measure their performance in a sport and how they can improve it. It works by starting a recording and using the sensors on an Apple Watch to collect data, which the API then calculates and filters. It produces maximum acceleration, maximum speed, and maximum ski angle, which are displayed in the phone application. It is better than what is currently available because this type of sensor logging is not very accurate in other applications, nor is it simultaneous with video. Overall, it is a great one-stop shop for managing one's activity.
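A hypothetical sketch of the metric-extraction step (the real pipeline also filters raw sensor noise first, which is omitted here): given three-axis accelerometer samples and speed samples from the watch, compute the session maxima the phone app displays.

```python
import math

# Hypothetical illustration, not the bePlayFuel API itself: derive the
# session's maximum acceleration and maximum speed from raw samples.
# Accelerometer samples are (ax, ay, az) in m/s^2; speeds are in m/s.
# Real data would be noise-filtered before this step.

def session_maxima(accel_samples, speed_samples):
    max_accel = max(math.sqrt(ax * ax + ay * ay + az * az)   # magnitude of each 3-axis sample
                    for ax, ay, az in accel_samples)
    max_speed = max(speed_samples)
    return max_accel, max_speed

max_a, max_v = session_maxima([(0.0, 0.0, 9.8), (3.0, 4.0, 0.0)], [4.2, 7.5, 6.1])
```

The ski-angle maximum would follow the same pattern over the watch's orientation samples.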
BrainBeats: Version 4
BrainBeats is a web application built for converting electroencephalogram (EEG) recordings into Musical Instrument Digital Interface (MIDI) files, which can then be converted to generic audio files (.mp3, .wav, etc.). Audio files created by users can be uploaded, edited, shared, and downloaded on the BrainBeats web platform with a BrainBeats user account. The platform allows you to create a script for your recording session, record your song with an EEG headset using your own unique musical settings, post your recorded songs, download the MIDI equivalent of your song, and create playlists with music you enjoy. It also allows you to connect with other users creating music with their own EEG headsets and to interact with their posts.
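As a hedged illustration of the core conversion idea (BrainBeats' actual mapping is driven by each user's musical settings and is not reproduced here), one simple scheme scales each EEG sample's amplitude linearly into the 0–127 MIDI note range:

```python
# Hypothetical EEG-to-MIDI mapping sketch, not BrainBeats' actual scheme:
# clip each EEG amplitude (microvolts) to a range and scale it linearly
# onto MIDI note numbers 0..127.

def eeg_to_midi_notes(samples, lo=-100.0, hi=100.0):
    """Map EEG amplitudes (clipped to [lo, hi]) to MIDI note numbers."""
    notes = []
    for v in samples:
        v = min(max(v, lo), hi)                          # clip outliers
        notes.append(round((v - lo) / (hi - lo) * 127))  # linear scale to 0..127
    return notes

notes = eeg_to_midi_notes([-100.0, 0.0, 100.0])
```

A per-user musical setting could then constrain these raw note numbers to a chosen key or scale before they are written into the MIDI file.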
Our project aims to provide a better learning environment for simulating a human heart and interacting with it. Our VR application, designed for the Meta Quest 2, is a port of the 1.0 version (cARdiac) created in AR for the Microsoft HoloLens. The project includes an interactable heart model, flashcards, case quizzes, and additional heart-related information. The UCF College of Medicine has sponsored this project with the plan of using it as a study tool for M1 and M2 students. To use it, a medical student will launch the Cardiac VR app from one of UCF COM's Quest 2 headsets, log in, and proceed to interact with the provided study tools. Participation data is relayed to instructors via a database. This VR approach was chosen because it is more cost-efficient than the previous AR technology, it has easier and more intuitive controls, and a VR environment provides more implementation options. Our group is focused on adapting the 1.0 app for VR, fixing previous bugs and issues, adding database interactivity, and improving upon the modules. As for other applications, there have been talks to expand on this app in future projects to include other body simulations as well.
cARdiac-AR 2.0: AR app on cardiac disorders
cARdiac 2.0 is an interactive, educational mixed reality tool that displays a high-quality 3D model of the human heart. It is designed for medical students at UCFCOM, helping them conceptualize the material covered in class and see how different cardiac disorders affect the heart rhythm using the HoloLens 2. Students are able to interact with the realistic heart model and test their knowledge using flashcard and quiz components. The content presented in the app is created by faculty to correlate with material covered in class. This project is an improvement on the first version of the application.
Cloak: Man-in-the-middle Observer
Cloak is a system used to capture packets going to and from a specific end user's computer. It can be used by systems administrators, privacy researchers, and penetration testers across use cases ranging from monitoring device usage to gaining a foothold in a corporate network. It is unique in that it collects all data from a specified host device and transmits it to local or cloud-based infrastructure to be analyzed.
Composition Today v2
Our project aims to bring life back to the original Composition Today website, which once served as an online hub for musicians to connect and explore opportunities. However, the original site is no longer maintained and has an outdated design. Last year, a senior design team attempted to accomplish this goal; however, their end product did not meet musicians' needs. Our team focused heavily on interviewing the target audience and discovered that musicians want a space where they can browse and search jobs, competitions, concerts, and festivals. We will create a new website where musicians can go to view these opportunities, making us the first website to display all of these opportunity types in one place. Additionally, unlike the original site, where one person manually posted all opportunities, our website allows users to easily create posts themselves. Our team will also focus on designing a more modern and less cluttered user interface. This project will restore and modernize a once beloved website that musicians used to find opportunities within the music industry.
Conductor Evaluation Through Motion Capture
This project aims to convert the College of Medicine's 2D browser-based medical escape room game into a 3D VR interactive escape room. The VR application tests students' medical diagnosis knowledge, which they must apply to solve various puzzles and complete the escape room. The application will track various statistics, including completion time, responses to puzzle questions, and hints used. The statistics collected will be stored in a database so College of Medicine instructors can view students' performance and gauge their knowledge of the material being tested within the escape room. This VR version is more immersive and interactive than the existing browser version. This project provides a basis for future projects to accomplish similar feats in other fields, as well as opening the door for other learning institutions to use similar teaching methods.
DroneVQA: Visual Inspection and Deep Learning
This simulation-based proof-of-concept deploys open-source visual question answering (VQA) artificial intelligence models on the camera feed of simulated drones to perform visual inspection of simulated environments. Drones are ideal candidates for performing visual inspection, as they are easily maneuverable, remotely relocatable, able to self-navigate, and allow for dynamic perspectives. The real-world applications of this technology are limitless, enabling autonomous complex surveillance, environmental monitoring, situational analysis, self-inspection, and maintenance support. This project takes a research-based approach to compare the performance of open-source models against one another, as well as before and after they are fine-tuned for drone usage, by comparing key factors including answer accuracy, topic understanding, processing speed, and model training improvement. Using the DroneVQA desktop-based software tool, users can fly a drone around virtual environments and ask questions about what the drone's camera sees. To demystify the functionality of VQA models, result visualizations displaying the exact image pixels or object-detection results that informed the model's conclusion are provided along with the top answers. Users may also elect to simulate environmental weather effects such as rain, snow, dust, and fog, as well as camera defects including lens blur, pixel corruption, and disconnection, vital for evaluating self-inspection abilities and real-world model robustness.
GAEA Pulse Surveys - Space Force
We are attempting to improve the current evaluation system used by the Space Force by providing a platform where individuals can participate in required evaluations and view real-time information about how they are being evaluated, and where higher-ranking individuals can manage whom they supervise.
Growth+: Math for Africa
Millions of children across Africa struggle with basic math, yet they have increasing access to smartphones at home. Our vision is to tackle this need through a mobile application that empowers parents to provide their children with fundamental math skills. Our project, Growth+, is a mobile app designed to target numeracy, the ability to understand and use fundamental math skills, in Africa. While a couple of educational apps have been developed in East Africa, our app will be the first to primarily target numeracy in Central and West Africa. Designed specifically for areas with little to no internet, our application works fully offline and uses limited storage on smartphones. Additionally, our app functions in three main languages: French, English, and Chadian Arabic, with options to add more local languages. Growth+ is designed for parents to use with their children. The parent portal provides reports on each child's progress as well as the subjects in which they are excelling and struggling. The children follow roadmaps to learn math skills and engage in friendly competition through a leaderboard. Our vision is a better future for African children through improved math skills, and this starts with parents and their children using Growth+.
Heterogeneous Swarm Formation for Search and Rescue in Forests
Our group is implementing a drone swarm that can coordinate a search and rescue operation for a lost person in a forest. Our project runs in Unreal Engine 4; upon starting, it will fly several groups of drones that navigate a large area, identify the lost person actor, and use collision avoidance to avoid danger. The project is intended to be used by law enforcement for locating missing persons and criminal suspects who have ventured deep into the natural landscape. Law enforcement still relies on manpower for conducting search and rescue, but our group proposes to leverage the height and intelligent navigation of modern-day drones to gain better coverage of an area and increase the success rate of missing person cases. Our drones will follow a spiral formation when searching the map while an overseer drone scouts ahead looking for possible areas of interest. If an area of interest is found, the overseer can assign nearby drones to the designated waypoint. The drones will then encircle the area, attempting to identify the target with an onboard YOLOv5 object detector. Should the drones reach the end of the spiral without identifying the target, they will redo the path while employing Bayesian methods to search only the areas of higher likelihood.
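The Bayesian re-search step can be sketched as follows (the grid, prior, and detection probability are hypothetical): after a cell is searched and the target is not seen, that cell's probability is discounted by the sensor's miss rate and the belief is renormalized, so later passes concentrate on the cells most likely to still hold the target.

```python
# Hedged sketch of Bayesian search-theory updating, with a hypothetical
# 3-cell grid and detection probability: an unsuccessful search of a
# cell lowers that cell's posterior and raises everyone else's.

def update_after_miss(belief, searched_cell, p_detect=0.9):
    """belief: dict cell -> prior probability; returns the posterior after a miss."""
    post = dict(belief)
    post[searched_cell] *= (1.0 - p_detect)   # Bayes: P(target in cell | not detected)
    total = sum(post.values())
    return {c: p / total for c, p in post.items()}

belief = {"A": 0.5, "B": 0.3, "C": 0.2}
belief = update_after_miss(belief, "A")        # searched cell A, target not seen
next_cell = max(belief, key=belief.get)        # search the highest-likelihood cell next
```

Repeating this after every pass is what lets the swarm's second sweep skip well-covered cells and focus on the remaining high-probability areas.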
Human Friendly Java Compiler
The current output of the Java compiler is very limited: it shows a single line containing the position where the compiler detected the problem, typically along with a single line of output that often fails to provide any information beyond what was incorrect about the code. Our project aims to make the output of the Java compiler significantly more informative and helpful. We add more context to errors to show why an error happened, and we show additional information by adding new structured sections to errors. Programmers of all skill levels will benefit from more informative errors.
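As a hypothetical mock-up of what a structured diagnostic might look like (written in Python for brevity; the section names and layout here are illustrative, not our project's actual format), each error carries its location, the offending source line, a cause, and a suggested fix:

```python
# Hypothetical rendering of a structured compiler diagnostic, for
# illustration only: location, source excerpt with a caret marker,
# a "cause" section, and a "help" section with a suggested fix.

def render_error(path, line, col, code_line, cause, suggestion):
    marker = " " * (col - 1) + "^"        # point at the offending column
    return "\n".join([
        f"error: {path}:{line}:{col}",
        f"  | {code_line}",
        f"  | {marker}",
        f"cause: {cause}",
        f"help:  {suggestion}",
    ])

msg = render_error("Main.java", 7, 9, 'int x = "hello";',
                   "a String cannot be assigned to an int variable",
                   "change the type of x to String, or assign an int value")
```

Compared to the single-line message javac emits today, the extra sections give a reader both the reason the error happened and a concrete path to fixing it.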