iCORE logo

iCORE is a multidisciplinary research group at Texas A&M University - Corpus Christi (TAMUCC). With a core focus on computation, we are involved in a broad spectrum of projects in robotics, coastal & atmospheric science, smart cities & agriculture, and much more. There are many opportunities to get involved with research as an undergraduate or graduate student. We encourage cross-discipline collaborations, and our members have backgrounds in computer science, atmospheric science, engineering, and other fields. iCORE can be thought of as a hub connecting students whose research has a significant computational element, such as machine learning for fog prediction, energy-efficient path planning for marine robots, and simulation & visualization of floods. Students are encouraged to get involved with iCORE even if they are already working in another lab on campus: iCORE offers opportunities to enhance, rather than replace, their existing research through networking, collaborations, workshops, and more.

  • Large variety of vehicle platforms, sensors, cameras, single-board computers, and high-end GPUs available for projects.
  • External collaborators include Lonestar UAS, NASA, AI2ES, and Agrilife.
  • Involved with TAMUCC's Research Experience for Undergraduates program.
  • Student-led training and workshops on topics such as Linux, Git, ROS, Gazebo, etc.
  • A supportive audience to get presentation feedback before conferences, dissertation defense, etc.
  • Social events include beach cook-outs, potlucks, and game nights.

Path planning diagram
Robots must travel from their current position to a destination. Despite decades of research in path planning, complex missions still pose challenges. We work on multi-objective planning for autonomous boats that considers the influence of wind and currents, variable water levels, and shorelines. These objectives make the solution space so large that classical planning algorithms are simply too slow. We have considerable experience applying metaheuristic algorithms such as Genetic Algorithms and Particle Swarm Optimization. These approaches sacrifice a guaranteed optimal solution in exchange for huge speed improvements.
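To make the metaheuristic idea concrete, here is a minimal Particle Swarm Optimization sketch for waypoint planning. It is not our actual planner: the cost function is a toy stand-in that adds path length to a penalty for moving against a single uniform current vector, whereas real missions use full wind/current fields, water levels, and shoreline constraints. All names (`path_cost`, `pso_plan`) and parameter values are illustrative assumptions.

```python
import random

def path_cost(waypoints, current=(0.3, 0.0)):
    """Toy cost: path length plus a penalty for heading against a
    uniform current (a hypothetical stand-in for real wind/current data)."""
    cost = 0.0
    for (x1, y1), (x2, y2) in zip(waypoints, waypoints[1:]):
        dx, dy = x2 - x1, y2 - y1
        dist = (dx * dx + dy * dy) ** 0.5
        # Penalize segment headings that oppose the current vector.
        against = max(0.0, -(dx * current[0] + dy * current[1]))
        cost += dist + against
    return cost

def pso_plan(start, goal, n_mid=3, n_particles=20, iters=100, seed=0):
    """PSO over the coordinates of n_mid intermediate waypoints."""
    rng = random.Random(seed)
    dim = n_mid * 2
    pos = [[rng.uniform(0, 10) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]

    def decode(p):
        mids = [(p[2 * i], p[2 * i + 1]) for i in range(n_mid)]
        return [start] + mids + [goal]

    pbest = [p[:] for p in pos]
    pbest_cost = [path_cost(decode(p)) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]

    w, c1, c2 = 0.7, 1.5, 1.5  # standard inertia / cognitive / social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = path_cost(decode(pos[i]))
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return decode(gbest), gbest_cost

# Plan a short route; the swarm quickly finds near-direct paths.
path, cost = pso_plan((0.0, 0.0), (10.0, 10.0))
```

The key trade-off is visible here: PSO only samples the solution space, so nothing guarantees the returned path is optimal, but each evaluation is cheap and the swarm converges in a fixed, small number of iterations regardless of how many objectives the cost function folds in.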

FogNet diagram
Complex machine learning architectures are increasingly used to develop high-performance models. However, it is very difficult to understand how these models work. What relationships in the training data are they relying on to make decisions? There are many examples where a seemingly powerful model had learned to exploit spurious relationships that do not hold in the real world. We are working with the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) to develop XAI techniques and strategies to explain complex weather forecasting models. We are currently investigating a 3D CNN used for fog prediction, working with a forecaster from the National Weather Service to develop XAI techniques that will help to understand and improve the model. In addition to the high-dimensional inputs, the explanations themselves are high-dimensional, which makes them difficult to visualize and interpret. We are developing techniques to help end-users extract meaningful insights from large sets of high-dimensional XAI output, including XAI aggregation techniques as well as interactive visualization tools.
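The two ideas above, generating a per-case explanation and then aggregating many of them, can be sketched in a few lines. This is a simplified illustration, not the FogNet pipeline: occlusion sensitivity stands in for whatever XAI method is used on the real 3D CNN, and a toy function replaces the model. The functions `occlusion_saliency` and `aggregate` are hypothetical names.

```python
import numpy as np

def occlusion_saliency(model, x, patch=2, baseline=0.0):
    """Occlusion sensitivity: slide a baseline patch over the input and
    record how much the model's output drops at each location."""
    base = model(x)
    sal = np.zeros_like(x)
    h, w = x.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = x.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            sal[i:i + patch, j:j + patch] = base - model(occluded)
    return sal

def aggregate(sal_maps):
    """Aggregate per-case explanations into a mean map (typical importance)
    and a spread map (where explanations disagree across cases)."""
    stack = np.stack(sal_maps)
    return stack.mean(axis=0), stack.std(axis=0)

# Toy "model": only responds to intensity in the top-left quadrant,
# mimicking a network that relies on one spatial region.
def toy_model(x):
    return float(x[:4, :4].sum())

rng = np.random.default_rng(0)
maps = [occlusion_saliency(toy_model, rng.random((8, 8))) for _ in range(10)]
mean_map, std_map = aggregate(maps)
```

Even in this toy setting the aggregation problem is visible: each case produces a full-resolution saliency map, so summarizing hundreds of cases requires reductions like the mean/spread maps here, or the interactive tools mentioned above.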

Abstract: 3D reconstruction is a beneficial technique for generating the 3D geometry of scenes or objects for applications such as computer graphics, industrial construction, and civil engineering. There are several techniques to obtain the 3D geometry of an object. Close-range photogrammetry is an inexpensive, accessible approach to obtaining high-quality object reconstructions. However, state-of-the-art software systems need a stationary scene or a controlled environment (often a turntable setup with a black background), which can be a limiting factor for object scanning. This work presents a method that reduces the need for a controlled environment and allows the capture of multiple objects with independent motion. We achieve this by creating a preprocessing pipeline that uses deep learning to transform a complex scene from an uncontrolled environment into multiple stationary scenes with a black background, which are then fed into existing software systems for reconstruction. Our pipeline uses deep learning models to detect and track objects through the scene. The detection and tracking pipeline uses semantic-based detection and tracking and supports available pretrained or custom networks. We develop a correction mechanism to overcome some detection and tracking shortcomings, namely object re-identification and multiple detections of the same object. We show that detection and tracking are effective techniques for addressing scenes with multiple motion systems, and that objects can be reconstructed with limited or no knowledge of the camera or the environment.
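The core preprocessing step described in the abstract, turning a tracked object into its own "stationary scene" on a black background, can be sketched as follows. This is a schematic, not the paper's implementation: the boolean masks stand in for output from an instance-segmentation/tracking network (e.g. a Mask R-CNN-style detector), and the names `isolate_object` and `split_scene` are hypothetical.

```python
import numpy as np

def isolate_object(frame, mask, background=0):
    """Black out everything outside the object's mask, producing a frame
    that looks like a turntable capture with a black backdrop."""
    out = np.full_like(frame, background)
    out[mask] = frame[mask]
    return out

def split_scene(frames, tracks):
    """Split one uncontrolled scene into per-object scenes.
    `tracks` maps frame index -> {track_id: boolean mask}, i.e. the
    (assumed) output of a detection-and-tracking network."""
    scenes = {}
    for idx, frame in enumerate(frames):
        for tid, mask in tracks.get(idx, {}).items():
            scenes.setdefault(tid, []).append(isolate_object(frame, mask))
    return scenes

# Toy demo: two frames of one tracked object occupying the top-left corner.
frames = [np.full((4, 4), 7, dtype=np.uint8),
          np.full((4, 4), 9, dtype=np.uint8)]
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
scenes = split_scene(frames, {0: {"obj1": mask}, 1: {"obj1": mask}})
```

Each per-object image sequence in `scenes` can then be handed to an off-the-shelf photogrammetry tool as if it were captured in a controlled turntable setup, which is the key trick that removes the controlled-environment requirement.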
  • Sept. 23 | Fall 2022 Open House: Showcasing iCORE's opportunities for students and faculty to get involved. (flyer)
  • Oct. 14 | Linux Workshop: An introduction to working with the Linux command line
  • Nov. 11 | Introduction to Explainable AI (XAI): Learn about XAI: techniques, successes, and how to recognize and mitigate major pitfalls
  • TBD | Data Analytics: A workshop on data analytics techniques