Policy for AI use in grant applications


  • #15263
    Dana Boyd
    Member

    The Sarnoff Cardiovascular Research Foundation Board of Directors is holding a strategic planning meeting next month, and one of the discussion items will be how to detect AI use in our grant applications and whether we should have a policy for applicants.

    Are there any HRA members who have addressed this? Any information you can share would be greatly appreciated!

    Thank you!

    Dana 

    Dana Boyd

    Executive Director
    Sarnoff Cardiovascular Research Foundation 
    #15265
    The Arnold and Mabel Beckman Foundation allows the use of AI and implemented the following policy for the most recent group of submissions:
    Generative artificial intelligence tools, such as ChatGPT, are evolving into essential software in the researcher’s toolkit. Just as graphing and statistical software aid in data analysis and presentation, AI tools can assist authors in their work. The Arnold and Mabel Beckman Foundation supports the use of these tools as supplementary resources when used in an ethical and responsible manner. Accountability lies with the human authors, who remain responsible for the proper application of AI and the critical review and reporting of its output. All awardees and applicants are expected to comply with best practice in research and publishing ethics, take full responsibility for any errors made by an AI tool, and cooperate with questions relating to the accuracy or integrity of any part of their work, including data analyses and representation.

    Sincerely,

    Catrina

    #15266
    Hi Dana,
    The Lipedema Foundation has just released our 2025 RFP, and we have implemented an AI policy.  We take the approach that if you use AI, you should be transparent about it. Take a look, and let me know if you have any questions.

    USE OF ARTIFICIAL INTELLIGENCE (AI)

    LF is open to the use of AI to enhance your work. However, we firmly believe that all professional review, critical thinking, and thoughtful analysis must be done without the use of AI.

    Applicant AI Use

    • Applicants may use AI tools (e.g., large language models) for this RFP. However, all use of AI, and details about how it was used, must be disclosed in the LOI and Full Application stages. This includes any images or text created by AI.

    • Example: GPT-4 was used to refine the specific aims and conclusions.

    The use of AI in the LOI and/or Full Application stages does not influence scoring.

    LF AI Use

    • Grants and applications are considered confidential. Therefore, LF will not analyze any submissions using any AI tool. We may use AI to summarize reviewer comments; however, individual grants will not be put into any AI models.

    • All submission reviews and readings will be done by LF staff and peer reviewers. Ultimately, the decision rests with LF for LOI follow-up calls and final funding decisions.

    Additional Information

    • AI tools are generally a public forum, and your ideas are not confidential when you use them. Please enter any proprietary or confidential information into AI/LLM tools with caution and in compliance with your institution’s privacy rules. It is recommended to turn off settings such as “sharing” or “adding to the model” to avoid disclosing confidential information.

    • The accuracy of submissions and legal ownership of Intellectual Property (IP) are the responsibility of the applicant. We are not responsible for funding someone else’s IP. Applicants must ensure they own the rights to all IP they disclose to AI tools.

    Best regards,

    Laura Harmacek, PhD

    Program Officer, Scientific Programs

    Lipedema Foundation

    203-399-2444

    #15267

    Hello,

    Applicants for all programs (including pre-proposals) submitted to the American Heart Association are required in ProposalCentral to answer Yes/No as to whether they used a large language model (LLM) in their submission, and if Yes, to disclose which section(s):

    The American Heart Association permits the use of a large language model (LLM – e.g. ChatGPT) or an artificial intelligence tool to generate and/or edit content in research proposals submitted for funding. This information must be disclosed at the time of submission. Disclosure of this information does not impact peer review. If this information is not disclosed accurately and the use of these tools is identified, the proposal may be administratively withdrawn.

    Additionally, this is our guidance to peer reviewers:

    The American Heart Association DOES NOT permit the use of a large language model (LLM – e.g. ChatGPT) or an artificial intelligence tool to generate and/or edit content in peer review critiques. Uploading any portion of a research proposal into a large language model (LLM – e.g. ChatGPT) or an artificial intelligence tool to assist in writing a critique of the proposal is explicitly prohibited, as it is a violation of the AHA’s Peer Reviewer Certification Statement (including confidentiality, non-disclosure, and conflict of interest).

    Jessica Biddinger

    National Sr. Director, Research & Grants Administration

    American Heart Association

    #15268

    I sit on a couple of grant review committees that don’t allow the use of AI when reviewing grants. We have also rejected a number of applications whose citations weren’t real. A tech grant I reviewed required applicants not to use an LLM to help write the grant because it might affect their intellectual property rights.

    I recommend letting applicants explain how they are using AI. Are they using it to review large sets of data, like brain scans? What data was used to train the model? Or are they using LLMs to write the application?

    Anne Grego-Nagel, PhD
