4M is a framework for training multi-modal and multi-task models that can handle a wide range of vision tasks out of the box and perform multi-modal conditional generation. Experiments demonstrate the framework's generalizability and scalability, laying the groundwork for further exploration of multi-modal learning in vision and other domains.
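
The summary does not spell out the training objective, but the core idea named in the 4M paper is masked modeling over modalities that have been mapped to discrete tokens: a subset of tokens from all modalities is fed to the model, which must predict a masked subset. The sketch below is a minimal, encoder-only illustration of that idea under those assumptions; it is not 4M's actual architecture, and all class and variable names (e.g. `ToyMultimodalMaskedModel`) are illustrative. Positional embeddings are omitted for brevity.

```python
import torch
import torch.nn as nn

class ToyMultimodalMaskedModel(nn.Module):
    """Toy masked modeling over concatenated streams of discrete modality tokens."""
    def __init__(self, vocab_sizes, dim=128, heads=4, layers=2):
        super().__init__()
        # One token embedding table per modality (e.g. image tokens, caption tokens).
        self.embeds = nn.ModuleList(nn.Embedding(v, dim) for v in vocab_sizes)
        # Learned modality embeddings let the model tell token streams apart.
        self.modality_embed = nn.Embedding(len(vocab_sizes), dim)
        # Shared learned embedding that replaces masked-out tokens.
        self.mask_token = nn.Parameter(torch.zeros(dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)
        # One prediction head per modality vocabulary.
        self.heads = nn.ModuleList(nn.Linear(dim, v) for v in vocab_sizes)

    def forward(self, streams, masks):
        # streams[i]: (B, L_i) discrete tokens of modality i
        # masks[i]:   (B, L_i) bool, True where the token is hidden and must be predicted
        parts = []
        for i, (tok, m) in enumerate(zip(streams, masks)):
            emb = self.embeds[i](tok) + self.modality_embed.weight[i]
            emb = torch.where(m.unsqueeze(-1), self.mask_token.expand_as(emb), emb)
            parts.append(emb)
        h = self.encoder(torch.cat(parts, dim=1))
        # Split the encoded sequence back into per-modality chunks and decode each.
        logits, start = [], 0
        for i, tok in enumerate(streams):
            length = tok.shape[1]
            logits.append(self.heads[i](h[:, start:start + length]))
            start += length
        return logits


# Usage: two toy modalities, e.g. 196 "image" tokens and 16 "caption" tokens per sample.
vocab_sizes = [1024, 512]
model = ToyMultimodalMaskedModel(vocab_sizes)
B = 2
streams = [torch.randint(0, 1024, (B, 196)), torch.randint(0, 512, (B, 16))]
masks = [torch.rand(B, 196) < 0.5, torch.rand(B, 16) < 0.5]
logits = model(streams, masks)
# Cross-entropy only on masked positions, summed over modalities.
loss = sum(
    nn.functional.cross_entropy(lg[m], tok[m])
    for lg, tok, m in zip(logits, streams, masks)
)
loss.backward()
```

Conditioning falls out of the same setup: keeping one modality's tokens fully visible while masking another's turns the masked-prediction model into a conditional generator from the visible modality to the hidden one.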