Brian Gin, MD, PhD

Title(s): Associate Professor, Pediatrics
School: School of Medicine
Address: 550 16th Street, #4503
San Francisco, CA 94158
Phone: --
ORCID: 0000-0001-7655-3750

    Education and Training
    University of California, Berkeley: Ph.D. (2009), Chemistry
    University of California, San Francisco: M.D. (2011), Medicine
    University of California, San Francisco: Residency (2014), Pediatrics
    University of California, San Francisco: Fellowship (2016), Pediatric Hospital Medicine
    University of California, Berkeley: M.A. (2016), Education

    Overview
    Patients-Trainees-Supervisors: clinical learning's triad of trust

    In health professions education, relationships among patients, trainees, and supervisors may not always coalesce spontaneously or effectively. Within this triad, a web of uncertainties and motivations makes each party vulnerable to the others. The mitigation of uncertainty, alignment of motivation, and acceptance of vulnerability all require trust. Patients seeking care may distrust their providers, but they must at least hold some trust in their providers’ ability to match them with the care they need. Trainees are motivated not only by direct care of their patients, but also by their own learning and development. Aligning patient care with learning thus requires all parties to accept the vulnerability inherent in receiving and delivering care that is also crafted as an opportunity for learning.
    We seek to understand how trainees build and receive trust within this dynamic, and how this process can be improved from both a learning and a patient-outcomes perspective. Within health professions education, a recent focus on trust has emerged around the trainee-supervisor dyad because of its relevance to the framework of entrustment in assessment. While a trainee’s opportunities for learning may depend on the trust their supervisor places in them, they depend on their patients’ trust as well. Patient-provider trust has also been a topic of strong interest, so we aim to understand how these two dyads of trust interface, with a particular focus on learning and patient care outcomes.


    Blurring qualitative and quantitative approaches: investigating clinical trust with AI language models

    Advances in artificial intelligence (AI) and natural language processing (NLP) promise new insights into the analysis of narrative data. Computational and algorithmic developments in deep learning neural networks have led to the advent of language models (LMs, including large language models, or LLMs) with unprecedented abilities not only to encode the meaning of words, sentences, and entire bodies of text, but also to use such encodings to generate new text. Within health professions education research, these LMs promise to bridge, and potentially blur, the distinction between qualitative analysis and quantitative measurement approaches.
    Our research explores the interpersonal dynamics of clinical entrustment by developing LM methodologies that augment the reach of traditional qualitative methods for analyzing large narrative datasets. We have employed LMs to assist narrative analysis in several ways, including: 1) to discover qualitative themes and measure associated constructs in unlabeled narratives, and 2) to uncover latent constructs underlying the classification of pre-labeled narratives.
    In 1), asking how supervisors and trainees may approach entrustment decisions differently, we developed a transfer-learning strategy (applying LLMs trained on datasets separate from the study dataset) to identify and measure constructs in a large dataset of feedback narratives. The constructs identified algorithmically included features of clinical task performance and the sentiment characterizing the language used in the feedback. The LLMs provided consistent measurement of these constructs across the entire dataset, enabling statistical analysis of differences in how supervisors and trainees reflect on entrustment decisions and respond to potential sources of bias.
    In 2), asking how entrustment decisions shape feedback, we trained an LM from scratch to predict entrustment ratings from feedback narratives, using a training set of narratives paired with entrustment ratings. By deconstructing the trained LM, we uncovered latent constructs the model used to make its predictions, including the narrative’s level of detail and the degree to which the feedback was reinforcing versus constructive.
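    As a rough illustration of the transfer-learning idea in 1), the sketch below applies an off-the-shelf pretrained sentiment model to feedback-like text. The model name, library calls, and example narratives are illustrative assumptions, not the pipeline, constructs, or data used in the studies described above.

        # Sketch only: score sentiment in feedback-like narratives with a pretrained model.
        # The model and narratives are placeholders for illustration.
        from transformers import pipeline

        # An off-the-shelf model trained on data outside the study dataset (transfer learning).
        sentiment_scorer = pipeline(
            "sentiment-analysis",
            model="distilbert-base-uncased-finetuned-sst-2-english",
        )

        narratives = [
            "Took ownership of the admission and communicated a clear plan to the family.",
            "Needed frequent prompting to reassess the patient after the intervention.",
        ]

        # Convert each label/score pair into a signed score, which could then feed
        # statistical comparisons, e.g., supervisor- versus trainee-authored feedback.
        for text in narratives:
            result = sentiment_scorer(text)[0]
            signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
            print(f"{signed:+.3f}  {text}")

    Because the same model scores every narrative the same way, construct measurement stays consistent across a dataset far larger than manual coding could cover.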
    While LMs offer the advantages of consistent construct measurement and applicability to large datasets, they also carry the disadvantages of algorithmic bias and lack of transparency. In 1), we identified gender bias in the LLM we had trained to measure sentiment; this bias originated from its training dataset. To mitigate it, we developed a strategy that masked gender-identifying words from the LLM during both training and sentiment measurement. This allowed us to identify small but significant biases in the study data itself, which revealed that entrustment ratings appeared to be less susceptible to bias than the language used to convey them.
    With respect to transparency, while our work in 2) enabled the deconstruction of an LM designed for a specific task (i.e., prediction of entrustment), larger LLMs used in generative AI (including GPT-4 and LLaMA) currently lack the ability to trace their output to its sources. Our ongoing work focuses on developing LLM-based strategies that support transparency in narrative analysis, and on developing theory to characterize the epistemology and limitations of knowledge both represented within and derived from LLMs.
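    The masking strategy can be pictured as a preprocessing step applied before the model sees any text, at both training and measurement time. The word list and placeholder token below are illustrative assumptions rather than the study's actual masking scheme.

        import re

        # Illustrative, non-exhaustive list of gender-identifying words.
        GENDERED_TERMS = [
            "he", "she", "him", "her", "his", "hers",
            "himself", "herself", "man", "woman", "male", "female",
        ]
        _PATTERN = re.compile(r"\b(" + "|".join(GENDERED_TERMS) + r")\b", re.IGNORECASE)

        def mask_gender(text: str, placeholder: str = "[MASK]") -> str:
            """Replace gender-identifying words with a neutral placeholder so a
            downstream sentiment model cannot condition on them."""
            return _PATTERN.sub(placeholder, text)

        # Applied identically to the training corpus and to the narratives being measured:
        print(mask_gender("She quickly earned the team's trust in her assessments."))
        # -> "[MASK] quickly earned the team's trust in [MASK] assessments."

    Masking at both stages matters: removing gendered cues only at measurement time would still leave a model whose internal associations were learned from gendered training text.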

    Publications
    1. Assessing supervisor versus trainee viewpoints of entrustment through cognitive and affective lenses: an artificial intelligence investigation of bias in feedback. Adv Health Sci Educ Theory Pract. 2024 Feb 23. Gin BC, Ten Cate O, O'Sullivan PS, Boscardin C. PMID: 38388855.
    2. ChatGPT and Generative Artificial Intelligence for Medical Education: Potential Impact and Opportunity. Acad Med. 2024 Jan 01; 99(1):22-27. Boscardin CK, Gin B, Golde PB, Hauer KE. PMID: 37651677.
    3. The fundamentals of Artificial Intelligence in medical education research: AMEE Guide No. 156. Med Teach. 2023 06; 45(6):565-573. Tolsgaard MG, Pusic MV, Sebok-Syer SS, Gin B, Svendsen MB, Syer MD, Brydges R, Cuddy MM, Boscardin CK. PMID: 36862064.
    4. Evolving natural language processing towards a subjectivist inductive paradigm. Med Educ. 2023 05; 57(5):384-387. Gin BC. PMID: 36739578.
    5. Comprehensive Assessment of Clinical Learning Environments to Drive Improvement: Lessons Learned from a Pilot Program. Teach Learn Med. 2023 Oct-Dec; 35(5):565-576. Bernal J, Cresalia N, Fuller J, Gin B, Laves E, Lupton K, Malkina A, Marmor A, Wheeler D, Williams M, van Schaik S. PMID: 36001491.
    6. Exploring how feedback reflects entrustment decisions using artificial intelligence. Med Educ. 2022 Mar; 56(3):303-311. Gin BC, Ten Cate O, O'Sullivan PS, Hauer KE, Boscardin C. PMID: 34773415.
    7. How supervisor trust affects early residents' learning and patient care: A qualitative study. Perspect Med Educ. 2021 12; 10(6):327-333. Gin BC, Tsoi S, Sheu L, Hauer KE. PMID: 34297348; PMCID: PMC8633204.
    8. A Dyadic IRT Model. Psychometrika. 2020 09; 85(3):815-836. Gin B, Sim N, Skrondal A, Rabe-Hesketh S. PMID: 32856271.
    9. The limited role of nonnative contacts in the folding pathways of a lattice protein. J Mol Biol. 2009 Oct 09; 392(5):1303-14. Gin BC, Garrahan JP, Geissler PL. PMID: 19576901.