AI, Robotics & Neuroengineering

  • Professor Marcia O’Malley’s research addresses issues that arise when humans physically interact with robotic systems, with a focus on training and rehabilitation in virtual environments. The main goal of this research is to develop and demonstrate an adaptive training algorithm based on the display of artificial force cues within a simulated environment. These cues, displayed via an arm exoskeleton haptic feedback device, will convey additional information to the trainee beyond the physical laws that govern the simulated environment, such as desired trajectories within the environment, desired exploration speeds, and suitable interaction forces during task completion. The adaptive training algorithm will tune itself based on the individual's performance.
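The adaptive principle described above, assistance that tunes itself to the trainee's performance, can be sketched in a few lines. This is a toy illustration only: the function names, gains, and the specific update rule are invented for this sketch and are not the O'Malley lab's actual algorithm.

```python
def adapt_gain(gain, error, target_error=0.05, rate=0.5, g_min=0.0, g_max=10.0):
    """Toy 'assist-as-needed' update: raise the guidance gain when tracking
    error exceeds a target band, and lower it as the trainee improves, so
    assistance fades with skill. All parameter values are illustrative."""
    gain += rate * (error - target_error)
    return min(max(gain, g_min), g_max)

def guidance_force(gain, desired_pos, actual_pos):
    """Artificial force cue pulling the trainee toward the desired trajectory."""
    return gain * (desired_pos - actual_pos)

# Simulated session: as tracking errors shrink below the target band,
# the adaptive gain (and hence the assistance) decreases.
gain = 5.0
for error in [0.30, 0.20, 0.12, 0.06, 0.04, 0.03]:
    gain = adapt_gain(gain, error)
```

In a real trainer the error term would be a performance measure computed over a trial (e.g., trajectory deviation), and the force cue would be rendered by the exoskeleton at haptic rates.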
  • Professor Kaiyu Hang is broadly interested in robotic systems that can physically interact with other robots, people, and the world. By developing algorithms in optimization, learning, and control, his research focuses on efficient, robust, and generalizable manipulation systems, ranging from small-scale grasping and in-hand manipulation to large-scale dual-arm mobile manipulation and multi-robot manipulation.
  • Professor Lydia Kavraki is the director of the Ken Kennedy Institute at Rice University. She is a member of the National Academy of Medicine. Her research interests span robotics, AI, and biomedicine. In robotics and AI, she is interested in enabling robots to work with people and in support of people. Her research develops the underlying methodologies for achieving this goal: algorithms for motion planning for high-dimensional systems with kinematic and dynamic constraints, integrated frameworks for reasoning under sensing and control uncertainty, novel methods for learning and for using experiences, and ways to instruct robots at a high level and collaborate with them. The Kavraki Lab is inspired by a variety of applications: from robots that will assist people in their homes, to robots that would build space habitats. In biomedicine, she develops computational methods and tools to model protein structure and function, understand biomolecular interactions, aid the process of medicinal drug discovery, analyze the molecular machinery of the cell, and help integrate biological and biomedical data for improving human health.
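Kavraki is perhaps best known for the probabilistic roadmap (PRM) method of motion planning: sample collision-free configurations, connect nearby ones with collision-free edges, then search the resulting graph. As a rough, self-contained illustration of that idea in 2D (names, parameters, and the unit-square workspace are all invented for this sketch, not the lab's code):

```python
import random
import math
import heapq

def prm_plan(start, goal, is_free, n_samples=200, k=10, seed=0):
    """Plan a collision-free 2D path in the unit square with a basic PRM.
    start, goal: (x, y) tuples; is_free: callable that tests a point for
    collision. Returns a list of waypoints, or None if no path is found."""
    rng = random.Random(seed)
    # 1. Sample collision-free configurations; keep start and goal as nodes.
    nodes = [start, goal]
    while len(nodes) < n_samples:
        p = (rng.random(), rng.random())
        if is_free(p):
            nodes.append(p)

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def edge_free(a, b, steps=20):
        # Collision-check intermediate points along the straight segment.
        return all(is_free((a[0] + (b[0] - a[0]) * t / steps,
                            a[1] + (b[1] - a[1]) * t / steps))
                   for t in range(steps + 1))

    # 2. Connect each node to its k nearest neighbors via free edges.
    graph = {i: [] for i in range(len(nodes))}
    for i, p in enumerate(nodes):
        nbrs = sorted(range(len(nodes)), key=lambda j: dist(p, nodes[j]))[1:k + 1]
        for j in nbrs:
            if edge_free(p, nodes[j]):
                graph[i].append((j, dist(p, nodes[j])))
                graph[j].append((i, dist(p, nodes[j])))

    # 3. Dijkstra search on the roadmap from start (index 0) to goal (index 1).
    pq, best, prev = [(0.0, 0)], {0: 0.0}, {}
    while pq:
        d, u = heapq.heappop(pq)
        if u == 1:
            path, node = [], 1
            while node in prev:
                path.append(nodes[node])
                node = prev[node]
            return [start] + path[::-1]
        if d > best.get(u, float('inf')):
            continue
        for v, w in graph[u]:
            if d + w < best.get(v, float('inf')):
                best[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (d + w, v))
    return None
```

Real planners of this family operate in high-dimensional configuration spaces with kinematic and dynamic constraints; the 2D version above only shows the sample-connect-search structure.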
  • Professor Vaibhav Unhelkar develops new technology, in the form of algorithmic advances and interactive systems, to help intelligent machines such as robots and decision support systems reason about, learn from, and interact with humans. By merging expertise from AI, robotics, and human factors engineering, Professor Unhelkar has developed algorithms to enable fluent human-robot interaction and deployed collaborative robots among humans. In his ongoing research, he is developing computational techniques to combine data and human expertise (i.e., for human-in-the-loop AI) and to improve the transparency of intelligent machines.
  • Professor Behnaam Aazhang’s research interests include signal and data processing, information theory, dynamical systems, and their applications to neuroengineering, with focus areas in (i) understanding neuronal circuit connectivity and the impact of learning on connectivity; (ii) developing minimally invasive and non-invasive real-time closed-loop stimulation of neuronal systems to mitigate disorders such as epilepsy, Parkinson's disease, depression, obesity, and mild traumatic brain injury; (iii) developing a patient-specific, multisite wireless monitoring and pacing system with temporal and spatial precision to restore the healthy function of a diseased heart; and (iv) developing algorithms to detect, predict, and prevent security breaches in cloud computing and storage systems.
  • Professor Genevera Allen is the Founder and Faculty Director of the Rice Center for Transforming Data to Knowledge, informally called the Rice D2K Lab. Her research focuses on developing statistical machine learning tools to help scientists make reproducible data-driven discoveries. Her work lies in the areas of interpretable machine learning, optimization, data integration, modern multivariate analysis, and graphical models with applications in neuroscience and bioinformatics.
  • Professor Richard Baraniuk is a fellow of the American Academy of Arts and Sciences. His research interests in signal processing and machine learning lie primarily in new theory and algorithms involving low-dimensional models. His research on theory of deep learning, compressive sensing, multiscale natural image modeling using wavelet-domain hidden Markov models, and time-frequency analysis has been funded by NSF, DARPA, ONR, AFOSR, AFRL, ARO, IARPA, DOE, NGA, EPA, NATO, the Texas Instruments Leadership University Program, and several companies. He is also one of the founders of the Open Education movement that promotes the use of free and open-source-licensed Open Educational Resources (OER). Currently, Dr. Baraniuk is developing advanced machine learning algorithms for the personalized learning system OpenStax Tutor that integrates text, video, simulations, problems, feedback hints, and tutoring and optimizes each student's learning experience based on their background, context, and learning goals.
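Baraniuk is one of the pioneers of compressive sensing, which recovers a sparse signal from far fewer measurements than its ambient dimension. The sketch below illustrates the idea with orthogonal matching pursuit, a standard greedy recovery algorithm; the dimensions and random test signal are made up for the demo and are not tied to any particular paper:

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse signal x from measurements y = A @ x using
    orthogonal matching pursuit (greedy sparse recovery)."""
    m, n = A.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column of A most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

# Demo: a 3-sparse signal in R^100 recovered from only 40 random measurements.
rng = np.random.default_rng(0)
n, m, k = 100, 40, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
x = np.zeros(n)
x[[5, 40, 77]] = [1.0, -2.0, 1.5]              # unknown sparse signal
y = A @ x                                      # compressive measurements
x_hat = omp(A, y, k)
```

The point of the demo is that 40 linear measurements suffice for a 100-dimensional signal because it is sparse, which is the core insight of compressive sensing.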
  • Professor Lan Luan's research focuses on the development of multimodal neural interfaces that combine state-of-the-art electrical, optical, and other technologies to monitor and manipulate brain activity. The application of these neurotechnology advances enables the fundamental investigation of neurological disorders and the development of novel therapies. By developing and applying a more complete arsenal of novel tools, she hopes to provide a revolutionary multifaceted picture of the brain in health and in disease, and to seek new ways to better diagnose, treat, cure, and even prevent brain disorders.
  • Professor Ankit Patel is pursuing the unification of traditional hierarchical machine learning with deep neural networks, with applications to a variety of fields, including neuroscience, robotics, and particle physics. In this vein, he is designing and building the first generative convolutional net, which promises to enable (1) the training of sophisticated deep vision models from large quantities of unlabeled data, and (2) the execution of top-down inference for tasks in which fine-scale information is important (e.g., segmentation, pose estimation). He is also working with visual neuroscientists to build a bridge between machine learning models and real neural networks, using the latter to make testable predictions about the former. Finally, he is working with physicists at the Large Hadron Collider to build efficient new algorithms to separate signal from noise, in search of New Physics beyond our best generative model of the Universe thus far: the Standard Model of Particle Physics.
  • Professor Xaq Pitkow's primary focus is on developing theories of the computational functions of neural networks, especially how they compute properties of the world from ambiguous sensory evidence, an interdisciplinary effort that draws on neuroscience, physics, and machine learning. Professor Pitkow applies these general concepts primarily to sensory systems, especially vision. His results include explanations of how our brain can unblur our vision even as we constantly move our eyes, how visual signals are optimized for the capacity of our optic nerve, and how the structure of our cortex is matched to the structure of natural images.
  • Professor Jacob Robinson develops nanotechnologies to monitor and control specific cells in the nervous system, with the goal of helping reveal fundamental principles of neural function and advance the treatment of neurological disorders. Directions of his research include miniature wireless bioelectronics, nanomagnetic neural control, nanophotonics and computational imaging for neural sensing, as well as discovering neural circuits and control systems in millimeter-sized organisms.
  • Professor Chong Xie’s Xie Laboratory is primarily interested in applying specially designed functional devices to solve key challenges in fundamental and clinical neuroscience. The general goal is to realize seamless integration of man-made electronics with the nervous system and to help us better understand, interact with, and augment living systems. Recently, the Xie Laboratory has focused on developing a scalable, tissue-integrated electrical neural interface composed of ultraflexible nanoelectronic threads (NETs), which promotes reliable, glial scar-free integration with brain tissue and enables reliable chronic recording.
  • Professor Caleb Kemere’s research interests include real-time neural engineering, interacting with memory, deep brain stimulation, neural interface technologies, and open-source tools. In one project, Dr. Kemere’s team is seeking to develop systems that translate ongoing neural activity into information and use this to manipulate the hippocampal circuit in real time. One potential application is to build systems that would, for example, allow us to selectively inhibit the recall or long-term storage of traumatic episodes.
  • Professor Ashok Veeraraghavan’s research areas include computational imaging, compressive sensing for imaging, signal processing, computer vision, data science, and neuroengineering. He is a co-developer of FlatCam, a thin sensor chip with a mask that replaces the lenses of a traditional camera. The design is made practical by sophisticated algorithms that convert the raw sensor measurements into images and videos. FlatCams may find use in security and disaster-relief applications, as flexible, foldable, or wearable cameras, and even as disposable cameras. His team has also developed FlatScope, a flat microscope and software system that can decode and trigger neurons on the surface of the brain.
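The kind of computation behind a lensless camera can be illustrated with a toy linear model: the mask maps the scene onto the sensor through a transfer matrix, and reconstruction inverts that map. The sketch below uses Tikhonov-regularized least squares; the random matrix and tiny sizes are invented for the demo and are far simpler than FlatCam's actual (separable, calibrated) model:

```python
import numpy as np

def reconstruct(Phi, y, lam=1e-6):
    """Tikhonov-regularized least squares:
    argmin_x ||Phi @ x - y||^2 + lam * ||x||^2, solved in closed form."""
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(n), Phi.T @ y)

rng = np.random.default_rng(1)
n, m = 64, 128                      # scene pixels, sensor measurements (flattened)
Phi = rng.standard_normal((m, n))   # stand-in for the mask's transfer matrix
x = rng.standard_normal(n)          # unknown scene
y = Phi @ x                         # sensor measurements (no lens involved)
x_hat = reconstruct(Phi, y)
```

In practice the transfer matrix is measured by calibration, the problem is much larger, and the regularizer is chosen to suppress sensor noise, but the invert-a-linear-map structure is the same.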
  • Professor Simon Fischer-Baum’s lab takes a problem-centered approach to research questions about the representations and processes that underlie cognition. He combines a wide range of experimental methods, including computational modeling, behavioral studies, and brain-imaging techniques (e.g., fMRI, ERP/EEG, tDCS), and studies a variety of populations, all with the goal of understanding human capacities for language and memory. Current projects include written language processing, the representation of sequences, and the temporal dynamics of cognition.
  • Professor Randi Martin is the Elma Schneider Professor of Psychology. Her research interests lie in cognitive and affective neuroscience, including the psychology and neuropsychology of language, short-term and working memory, and whole-brain network connectivity and its relation to cognition. With funding from the NICHD, her team has researched different types of short-term memory deficits and their impact on word learning and sentence comprehension. Her team uses neuroimaging (fMRI) to study language processing both in individuals who have experienced brain damage or injury and in healthy individuals.
  • Professors Fathi Ghorbel, Pedram Hassanzadeh, Fred Higgs, Laura Schaefer, Tayfun Tezduyar, and Geoff Wehmeyer advance the cause of Energy & the Environment by using theoretical analysis, numerical modeling, and physics-based artificial intelligence to enhance electrical power generation efficiency, improve renewable energy systems such as photovoltaics and wind turbines, and develop climate prediction models.