Computational models that automate and monitor human-robot interaction, providing both humans and robots with an appropriate mental model of how the other will react to behaviors, data quality, instructions, and environmental changes, are of central importance to collaborative robotics, referred to here as "co-botics". The planned research will focus on applying advanced machine learning and pattern recognition methodologies to facilitate shared intelligent cooperation between robotic units and humans. To this end, advanced multi-modal (also called multi-view) data analysis will be developed and applied, describing cues from the real world (including humans) drawn from multiple information sources. Building on that technology, online visual information analysis will be combined with sensor data analysis for decision making, which the overall system will interpret as suggestion-based cooperation through shared intelligent interactions.

The project will then work toward enhancing the performance of multi-modal visual/sensor data analysis methods for effective human-robot interaction in scheduling applications. Moreover, it will focus on creating data visualizations that combine information from various types of sources (visual, depth, audio) in order to provide insight into how robots perceive their environment. We believe that such visualizations will allow us to better understand how to enhance the overall operation and increase the intelligence of robotic units in the targeted scenarios.
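As an illustration of how multi-modal decision making of this kind is often realized, the sketch below shows a simple weighted late-fusion scheme: per-modality decision scores (e.g. from visual and sensor analysis pipelines) are averaged with reliability weights before a single decision is taken. This is a minimal, hypothetical example, not the project's actual method; the modality names, scores, and weights are assumptions for illustration.

```python
import numpy as np

def late_fusion_decision(modality_scores, weights=None):
    """Combine per-modality decision scores by weighted averaging (late fusion).

    modality_scores: list of 1-D arrays, one per modality (e.g. visual,
                     depth, audio), each scoring the same candidate actions.
    weights: optional per-modality reliability weights.
    """
    scores = np.stack(modality_scores)        # shape: (n_modalities, n_actions)
    if weights is None:
        weights = np.ones(len(modality_scores))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()         # normalize so weights sum to 1
    fused = weights @ scores                  # weighted average per action
    return int(np.argmax(fused)), fused

# Hypothetical scores from two modalities over three candidate actions,
# with the visual modality weighted as twice as reliable as the sensor one
visual = np.array([0.2, 0.7, 0.1])
sensor = np.array([0.5, 0.3, 0.2])
action, fused = late_fusion_decision([visual, sensor], weights=[2.0, 1.0])
```

In this scheme the fused score vector can also be surfaced directly to the human partner, so that the robot's choice reads as a ranked suggestion rather than an opaque command.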