The Major Histocompatibility Complex (MHC) is of significant research value for the treatment of complex human diseases. Here, the BVLSTM-MHC model, based on a Bilateral and Variable Long Short-Term Memory (BVLSTM) network, was designed to overcome the limitation of depending on peptide length: the same model can predict peptides of different lengths. We trained four models for four species covering a total of 77 alleles: 62 human alleles, three mouse alleles, ten macaque alleles, and two chimpanzee alleles.


The steps to use the model to predict peptide binding to MHC class I are as follows:

  • Paste a peptide sequence into the text area, or upload a file containing peptide sequences.
  • Select a species.
  • Select an allele, or upload a file containing a list of alleles.
  • Select an output type: IC50 or score.
  • The dataset comprises four species subdatasets: human, mouse, macaque, and chimpanzee. Each subdataset contains three files: a training set, a test set, and a validation set. In addition, the human dataset includes an independent test set. We also supply the Python code of our BVLSTM models. Please download the datasets, Python code, and supplementary material by clicking the links below:

    Human and Other species and BVLSTM and Supplementary
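As a sketch of how one model can accept peptides of different lengths, a common preprocessing approach (an assumption for illustration, not necessarily the exact BVLSTM-MHC pipeline; function and variable names are hypothetical) is to one-hot encode each peptide over the 20 standard amino acids, pad all sequences to a shared maximum length, and keep a mask so the recurrent layers can ignore the padded positions:

```python
# Minimal stdlib-only sketch: one-hot encoding of variable-length
# peptides (e.g. 8-11-mers) padded to a common length, with a mask
# marking real residues (1) versus padding (0).
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard amino acids
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def encode_peptides(peptides, max_len=None):
    """Return (encoded, masks) for a batch of peptides.

    encoded[i] is a max_len x 20 list of one-hot rows for peptide i;
    masks[i] flags which positions are real residues, so a recurrent
    model can skip the padded steps.
    """
    if max_len is None:
        max_len = max(len(p) for p in peptides)
    encoded, masks = [], []
    for pep in peptides:
        rows, mask = [], []
        for pos in range(max_len):
            row = [0] * len(AMINO_ACIDS)
            if pos < len(pep):
                row[AA_INDEX[pep[pos]]] = 1
                mask.append(1)
            else:
                mask.append(0)  # padding position
            rows.append(row)
        encoded.append(rows)
        masks.append(mask)
    return encoded, masks

# Peptides of different lengths go through the same encoder:
X, M = encode_peptides(["SIINFEKL", "LLFGYPVYV"])  # an 8-mer and a 9-mer
```

With this representation, a single bidirectional LSTM can process 8- to 11-mers in one batch, which is the property the model above exploits.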

  • Jiang L, Yu H, Li J, Tang J, Guo Y, Guo F. Predicting MHC Class I Binder: Existing Approaches and a Novel Recurrent Neural Network Solution. Briefings in Bioinformatics, 2021.
  • This web tool was written in a combination of PHP, HTML, JavaScript, R, and Python. It contains tens of thousands of lines of code and thus may contain bugs or display incorrectly on certain browsers. This webpage has been tested on Microsoft Internet Explorer 11, Microsoft Edge, and Google Chrome; it was not designed for display on mobile platforms. We appreciate your comments to help us improve the website; please contact:

    Yan Guo, Ph.D.,
    Limin Jiang, M.S.