Project number
26047
Organization
Universal Avionics
Offering
ENGR498-F2025-S2026
Air Traffic Control (ATC) and pilot radio communications are essential for flight safety and operational efficiency. To test AI-based avionics that support aviation communication, realistic generated conversations between ATC and pilots are needed, capturing not only the textual dialogue but also nuanced details such as accent, emotion, and ambient noise.
This project will involve:
1. Reviewing authentic ATC/pilot communication recordings to understand common phrasing, call structures, and communication styles.
2. Annotating raw human audio recordings with metadata specifying accents, emotional tone (e.g., calm, stressed, urgent), and background noise characteristics (e.g., cockpit chatter, static, airport sounds); a sketch of a possible annotation record follows this list.
3. Designing an AI-powered generative system capable of creating realistic ATC/pilot conversations that simulate believable real-world flight scenarios.
4. Developing methods to ensure variability and authenticity in generated conversations, including simulating diverse global accents and environmental conditions.
5. Building a prototype software tool that can generate and output both text transcripts and synthetic audio based on the annotated metadata (see the pipeline sketch after this list).
6. Testing and validating the realism of generated conversations against expert evaluations (e.g., ratings from aviation professionals); a simple rating-aggregation sketch follows this list.
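
The annotation task in item 2 implies a consistent metadata record for each audio clip. Below is a minimal sketch of what such a record could look like in Python; the field names and label vocabularies (accent, emotional tone, noise tags) are illustrative assumptions, not a schema defined by the project.

```python
# Hypothetical annotation schema for one ATC/pilot audio clip.
# Field names and label values are illustrative assumptions only.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ClipAnnotation:
    clip_id: str                     # identifier of the raw recording
    speaker_role: str                # "pilot" or "controller"
    accent: str                      # e.g., "US Southwest", "Indian English"
    emotional_tone: str              # e.g., "calm", "stressed", "urgent"
    background_noise: list[str] = field(default_factory=list)  # e.g., ["static", "cockpit chatter"]
    transcript: str = ""             # verbatim text of the transmission


def to_json(annotation: ClipAnnotation) -> str:
    """Serialize one annotation record for storage alongside the audio file."""
    return json.dumps(asdict(annotation), indent=2)


if __name__ == "__main__":
    example = ClipAnnotation(
        clip_id="clip_0001",
        speaker_role="controller",
        accent="US Southwest",
        emotional_tone="calm",
        background_noise=["static"],
        transcript="Cessna one two three alpha bravo, cleared to land runway two one.",
    )
    print(to_json(example))
```

Keeping the annotations in a simple serializable structure like this makes it easy to condition the later generation step on the same fields the annotators used.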
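
Item 5 describes a tool that turns scenario metadata into both a transcript and synthetic audio. The sketch below outlines one possible pipeline under that assumption; generate_transcript and synthesize_audio are hypothetical placeholders standing in for a generative language model and a text-to-speech/noise-mixing backend, not real library calls.

```python
# Minimal generation-pipeline sketch. Both backend functions are
# placeholders to be replaced with real components in the prototype.
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class ScenarioSpec:
    phase_of_flight: str   # e.g., "approach", "taxi", "cruise"
    accent: str            # target accent for the synthetic voice
    emotional_tone: str    # e.g., "calm", "urgent"
    noise_profile: str     # e.g., "light static", "busy ramp"


def generate_transcript(spec: ScenarioSpec) -> list[tuple[str, str]]:
    """Placeholder for the generative language model: returns a list of
    (speaker_role, utterance) turns conditioned on the scenario."""
    return [
        ("pilot", "Tucson Tower, Skyhawk one two three alpha bravo, ten miles south, inbound for landing."),
        ("controller", "Skyhawk one two three alpha bravo, Tucson Tower, report three mile final runway one one left."),
    ]


def synthesize_audio(turns: list[tuple[str, str]], spec: ScenarioSpec) -> bytes:
    """Placeholder for text-to-speech plus noise mixing; a real prototype
    would call a TTS engine and overlay the requested noise profile."""
    raise NotImplementedError("wire in a TTS backend here")


def generate_conversation(spec: ScenarioSpec) -> tuple[str, bytes | None]:
    turns = generate_transcript(spec)
    transcript = "\n".join(f"{role.upper()}: {text}" for role, text in turns)
    try:
        audio = synthesize_audio(turns, spec)
    except NotImplementedError:
        audio = None  # audio path is not implemented in this sketch
    return transcript, audio


if __name__ == "__main__":
    spec = ScenarioSpec("approach", "US Southwest", "calm", "light static")
    text, _ = generate_conversation(spec)
    print(text)
```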
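
Item 6 calls for validating realism against expert judgment. One simple approach, sketched below, is to collect Likert-style realism ratings from aviation professionals and flag low-scoring conversations for review; the 1-5 scale and the 4.0 threshold are assumptions for illustration only.

```python
# Realism-rating aggregation sketch. The Likert scale and acceptance
# threshold are illustrative assumptions, not project requirements.
from statistics import mean


def summarize_ratings(ratings: dict[str, list[int]], threshold: float = 4.0) -> dict[str, dict]:
    """Average each generated conversation's expert ratings and flag
    conversations that fall below the realism threshold for review."""
    summary = {}
    for conversation_id, scores in ratings.items():
        avg = mean(scores)
        summary[conversation_id] = {
            "mean_realism": round(avg, 2),
            "n_raters": len(scores),
            "needs_review": avg < threshold,
        }
    return summary


if __name__ == "__main__":
    expert_scores = {
        "conv_001": [5, 4, 4],   # ratings from three aviation professionals
        "conv_002": [3, 2, 4],
    }
    print(summarize_ratings(expert_scores))
```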