GIT: A Generative Image-to-text Transformer for Vision and Language

Introduction

This repo provides example code to reproduce some of the results in GIT: A Generative Image-to-text Transformer for Vision and Language.

Installation

Inference

single image, captioning

AZFUSE_TSV_USE_FUSE=1 python -m generativeimage2text.inference -p "{'type': 'test_git_inference_single_image', \
'image_path': 'aux_data/images/1.jpg', \
'model_name': 'GIT_BASE', \
'prefix': '', \
}"

single image, question answering

AZFUSE_TSV_USE_FUSE=1 python -m generativeimage2text.inference -p "{'type': 'test_git_inference_single_image', \
'image_path': 'aux_data/images/1.jpg', \
'model_name': 'GIT_BASE_VQAv2', \
'prefix': 'what is it?', \
}"

multiple images, captioning

AZFUSE_TSV_USE_FUSE=1 python -m generativeimage2text.inference -p "{'type': 'test_git_inference_single_image', \
'image_path': ['aux_data/images/1.jpg', 'aux_data/images/1.jpg', 'aux_data/images/1.jpg', 'aux_data/images/1.jpg', 'aux_data/images/1.jpg', 'aux_data/images/1.jpg'], \
'model_name': 'GIT_BASE_VATEX', \
'prefix': '', \
}"

multiple images, question answering

AZFUSE_TSV_USE_FUSE=1 python -m generativeimage2text.inference -p "{'type': 'test_git_inference_single_image', \
'image_path': ['aux_data/images/1.jpg', 'aux_data/images/1.jpg', 'aux_data/images/1.jpg', 'aux_data/images/1.jpg', 'aux_data/images/1.jpg', 'aux_data/images/1.jpg'], \
'model_name': 'GIT_BASE_MSRVTT_QA', \
'prefix': 'what is it?', \
}"

Training

The repo shows the key code path for constructing the network input with transformations and running the forward/backward pass. The code can easily be plugged into any trainer. Here are examples for the base model: the first trains on image-caption pairs, and the second on image-question-answer triples.

python -m generativeimage2text.train -p "{'type': 'forward_backward_example', \
                'image_files': ['aux_data/images/1.jpg', 'aux_data/images/2.jpg'], \
                'captions': ['a couple of boats in a large body of water.', 'a view of a mountain with a tree'], \
            }"
python -m generativeimage2text.train -p "{'type': 'forward_backward_example', \
                'image_files': ['aux_data/images/1.jpg', 'aux_data/images/2.jpg'], \
                'prefixs': ['what is this?', 'how many trees?'], \
                'captions': ['several boats in a large body of water', '1'], \
            }"
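The `-p` flag in the commands above takes a Python-style dict whose `type` key names the function to run and whose remaining keys become its keyword arguments. As a hypothetical illustration of that convention (the `dispatch` and `run_example` names below are invented for this sketch, not the repo's actual parser):

```python
import ast

def run_example(image_files, captions, prefixs=None):
    # Stand-in for a real handler such as forward_backward_example.
    return {"images": image_files, "captions": captions, "prefixs": prefixs}

# Map the 'type' key to a handler function.
HANDLERS = {"forward_backward_example": run_example}

def dispatch(param_str):
    # The -p argument is a Python-literal dict, parsed safely here
    # with ast.literal_eval rather than eval.
    params = ast.literal_eval(param_str)
    handler = HANDLERS[params.pop("type")]
    return handler(**params)

result = dispatch(
    "{'type': 'forward_backward_example', "
    "'image_files': ['aux_data/images/1.jpg'], "
    "'captions': ['a boat on the water']}"
)
```

The same pattern explains why the VQA commands simply add a `prefixs` key: it flows through as an extra keyword argument to the handler.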

ImageNet

Class ID to unique readable names

Citation

Please consider citing the following reference if this repo helps your work.

@article{wang2022git,
  title={GIT: A Generative Image-to-text Transformer for Vision and Language},
  author={Wang, Jianfeng and Yang, Zhengyuan and Hu, Xiaowei and Li, Linjie and Lin, Kevin and Gan, Zhe and Liu, Zicheng and Liu, Ce and Wang, Lijuan},
  journal={arXiv preprint arXiv:2205.14100},
  year={2022}
}

Misc

The model is now available in 🤗 Transformers. You can also find a fine-tuning guide on image captioning with GIT here. Thanks to Niels Rogge for contributing the model to 🤗 Transformers and Sayak Paul for the fine-tuning guide.
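For the 🤗 Transformers route, captioning with the `microsoft/git-base` checkpoint follows the standard processor/generate pattern. A minimal sketch (a solid-color placeholder image is used so the snippet is self-contained; substitute your own image, and see the Transformers docs for the VQA and video variants):

```python
from transformers import AutoProcessor, AutoModelForCausalLM
from PIL import Image

# Download the base captioning checkpoint from the Hugging Face Hub.
processor = AutoProcessor.from_pretrained("microsoft/git-base")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base")

# Any RGB image works; a blank placeholder keeps the example runnable
# without the repo's aux_data files.
image = Image.new("RGB", (224, 224), color="white")

# Preprocess, generate autoregressively, then decode the token ids.
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(caption)
```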

Acknowledgement

Part of the code is based on transformers, clip, maskrcnn-benchmark, oscar, and virtex.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.