For robots to successfully take part in bi-directional social interactions with people, they must be capable of recognizing and responding to human affect; such robots would promote effective and engaging interactions with their users. In this thesis, a multimodal bi-directional affect architecture is proposed that determines user affect through a unique combination of body language and vocal intonation. Beyond one-on-one human-robot interactions, multi-robot teams can provide valuable assistance in Urban Search and Rescue (USAR) missions by exploring dangerous environments while searching for victims. One of the main challenges an operator faces in controlling a multi-robot team is the simultaneous control of multiple robots while switching between tasks and maintaining situational awareness. This thesis also proposes a multi-robot collaboration architecture that uses a learning-based semi-autonomous controller to effectively allocate sub-tasks to robots in order to complete USAR missions.