Recently, a research team from the University of Washington released a new visual tracking model called SAMURAI. The model builds on the Segment Anything Model 2 (SAM 2) and aims to address the challenges of visual object tracking in complex scenes, particularly fast-moving and self-occluding objects. While SAM 2 excels at object segmentation, it has limitations in visual tracking: its fixed-window memory approach keeps only the most recent frames without regard to their quality, which lets errors propagate in crowded scenes.
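To make the memory-bank contrast concrete, here is a minimal, hypothetical Python sketch. It is not SAM 2's or SAMURAI's actual code; the `FrameMemory` fields, thresholds, and class names are invented for illustration. It only contrasts an unconditional FIFO window with a quality-gated one, which is the spirit of the fix described above.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class FrameMemory:
    frame_idx: int
    mask_score: float    # hypothetical per-frame mask confidence
    motion_score: float  # hypothetical agreement with a motion model

class FixedWindowMemory:
    """SAM 2-style bank (simplified): always keeps the N most recent
    frames, regardless of how reliable each frame's mask was."""
    def __init__(self, size: int = 7):
        self.bank = deque(maxlen=size)

    def update(self, entry: FrameMemory) -> None:
        self.bank.append(entry)  # oldest entry evicted unconditionally

class QualityGatedMemory:
    """SAMURAI-style idea (simplified): only admit frames whose mask and
    motion scores clear thresholds, so unreliable frames (e.g. during an
    occlusion in a crowded scene) do not pollute the memory bank."""
    def __init__(self, size: int = 7, mask_thr: float = 0.5, motion_thr: float = 0.5):
        self.bank = deque(maxlen=size)
        self.mask_thr = mask_thr
        self.motion_thr = motion_thr

    def update(self, entry: FrameMemory) -> None:
        if entry.mask_score >= self.mask_thr and entry.motion_score >= self.motion_thr:
            self.bank.append(entry)

if __name__ == "__main__":
    fixed, gated = FixedWindowMemory(size=3), QualityGatedMemory(size=3)
    stream = [
        FrameMemory(1, 0.9, 0.9),
        FrameMemory(2, 0.8, 0.9),
        FrameMemory(3, 0.2, 0.3),  # occluded: unreliable mask
        FrameMemory(4, 0.1, 0.2),  # occluded
        FrameMemory(5, 0.9, 0.8),
    ]
    for f in stream:
        fixed.update(f)
        gated.update(f)
    print("fixed:", [m.frame_idx for m in fixed.bank])  # [3, 4, 5]
    print("gated:", [m.frame_idx for m in gated.bank])  # [1, 2, 5]
```

The toy run shows the point: after the occlusion, the fixed window is dominated by the two bad frames, while the gated bank still holds the reliable ones.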
Recently, a study from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland made its debut at the IEEE International Conference on Robotics and Automation in Rotterdam. The research explores how robotic hands can move past existing limitations to grasp a wider range of objects. The team noted that deep learning models have significantly improved the dexterous manipulation capabilities of multi-fingered hands, but contact-guided grasping in cluttered environments remains largely unexplored. To address this, the researchers designed a bio-inspired hand whose fingers can bend backward to pick up a variety of objects, and which can even detach itself to climb.