VideogradVis is a package that brings together state-of-the-art methods for Explainable AI (XAI) in computer vision. It is designed to help diagnose model predictions, both in production and during model development. Beyond these practical applications, the project also serves as a benchmark, giving researchers a solid foundation for exploring and evaluating new explainability methods.
Compatibility Assurance: rigorously tested on a wide range of common Convolutional Neural Networks (CNNs).
Versatile Applications: covers a broad spectrum of use cases, from classification and object detection to semantic segmentation, embedding similarity, and beyond.
python setup.py install
import torch
import torch.nn as nn
from VideogradVis.Vis import GradVis
# Load a pretrained X3D-M backbone and replace the classification head.
model = torch.hub.load('facebookresearch/pytorchvideo', 'x3d_m', pretrained=True)
model.blocks[5].proj = nn.Linear(in_features=2048, out_features=args.classes, bias=True)  # `args` holds command-line options (e.g. from argparse)

# Restore fine-tuned weights from a checkpoint.
print(f"Loading weights: {args.checkpoint}.")
model.load_state_dict(torch.load(args.checkpoint))
vis = GradVis(model)

# Dummy clip with shape (B, C, T, H, W); replace with a real, preprocessed video tensor.
model_input = torch.rand((1, 3, 120, 250, 250)).requires_grad_(True)
vis.compute_grad(input_tensor=model_input, path='/home/osama/pytorch-video/output/')
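In practice, the random tensor above would be replaced by a real, preprocessed clip. The following is a minimal sketch of one way to build such an input with torchvision, continuing from the example above: the file name clip.mp4, the 120-frame / 250x250 geometry (matching the dummy tensor), and the Kinetics-style normalization constants are illustrative assumptions, not part of the VideogradVis API.

import torch
import torchvision

# Illustrative clip path; substitute your own video file.
video, _, _ = torchvision.io.read_video('clip.mp4', pts_unit='sec')   # (T, H, W, C), uint8
video = video.permute(3, 0, 1, 2).float() / 255.0                     # (C, T, H, W), scaled to [0, 1]
video = (video - 0.45) / 0.225                                         # assumed Kinetics-style normalization

# Uniformly sample 120 frames and resize frames to 250x250 to match the input shape above.
idx = torch.linspace(0, video.shape[1] - 1, steps=120).long()
video = video[:, idx]
video = torch.nn.functional.interpolate(video, size=(250, 250), mode='bilinear', align_corners=False)

model_input = video.unsqueeze(0).requires_grad_(True)                  # (B, C, T, H, W)
vis.compute_grad(input_tensor=model_input, path='/home/osama/pytorch-video/output/')

Keeping requires_grad_(True) on the input tensor is what allows gradients with respect to the input to be computed and visualized.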
This document provides visualizations explaining the model's predictions on whether a headlight is ON or OFF. Each figure offers insights into the gradients and features that influenced the model's decision-making process.