Application of the Bidirectional Encoder Representations from Transformers Model for Predicting the Abbreviated Injury Scale in Patients with Trauma: Algorithm Development and Validation Study

The loss during fine-tuning is the cross-entropy loss, defined as $\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N} \log \hat{y}_{i,y_i}$, where $y_i$ is the true category index of the $i$th sample and $\hat{y}_{i,y_i}$ is the probability, predicted by the model, that the $i$th sample belongs to category $y_i$. After a careful hyperparameter search, we determined the optimal model configuration: an 8-layer BERT architecture including an input layer, six 384-unit hidden layers, and an output layer, which together form the encoder-decoder transformer components with 5 transformer blocks.
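The fine-tuning loss described above is the standard cross-entropy over predicted class probabilities. A minimal sketch of that computation follows; the function name, the toy probabilities, and the two-sample batch are illustrative assumptions, not values from the study:

```python
import math

def cross_entropy_loss(probs, labels):
    """Mean negative log-probability of the true class.

    probs: per-sample probability distributions over categories,
           i.e. probs[i][c] is the model's predicted probability
           that sample i belongs to category c.
    labels: true category index y_i for each sample.
    """
    n = len(labels)
    # -1/N * sum_i log( p_hat_{i, y_i} )
    return -sum(math.log(probs[i][labels[i]]) for i in range(n)) / n

# Toy batch: 2 samples, 3 categories (illustrative values only).
probs = [[0.7, 0.2, 0.1],
         [0.1, 0.8, 0.1]]
labels = [0, 1]
loss = cross_entropy_loss(probs, labels)
```

A perfectly confident, correct prediction (probability 1.0 on the true class) gives a loss of 0; less confident predictions give a strictly positive loss.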

Jun Tang, Yang Li, Keyu Luo, Jiangyuan Lai, Xiang Yin, Dongdong Wu

JMIR Form Res 2025;9:e67311