2021 Scientific Integrity Survey Frequent Questions
- Why did you conduct this survey?
- How will the data be used?
- Is this the first time EPA employees were surveyed about scientific integrity?
- What is the response rate for this survey?
- Was the survey confidential or anonymous?
- What types of questions were asked on the survey?
- How was this survey developed?
- Are these data accurate? What kind of testing have you done?
- What is a frequency distribution table?
- How are the data analyzed?
- Why do the questions have a different number of respondents?
- What are the next steps for the analysis?
- Why did this survey only cover the past two years (2019-2020)?
- Who can I contact with other questions?
Why did you conduct this survey?
The Scientific Integrity Program annually evaluates its goals and objectives and regularly distributes a scientific integrity survey to Agency employees as part of our process of continuous improvement. Specifically, the survey results identify successes and challenges to the implementation of EPA’s Scientific Integrity Policy.
How will the data be used?
These data will be used to identify opportunities to enhance the culture of Scientific Integrity at EPA. In addition, the survey results will be used to inform updates to EPA’s Scientific Integrity Policy.
Is this the first time EPA employees were surveyed about scientific integrity?
This is the third employee survey since EPA’s Scientific Integrity Policy was implemented in 2012. In addition to the 2021 survey, the Scientific Integrity Program conducted its first survey of EPA employees in 2016. The EPA Office of the Inspector General surveyed the Agency on scientific integrity in 2018.
What is the response rate for this survey?
The response rate for this survey is 18.1 percent, meaning that 18.1 percent of those who received the survey answered questions and submitted their responses. Additional employees began taking the survey but did not submit it; because they did not submit their responses, they are not included here. Of those who began the survey, 59.3 percent submitted their final survey.
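Both percentages are straightforward ratios of submitted surveys to recipients and to starters, respectively. A minimal sketch using hypothetical counts (the actual EPA counts are not stated here) is:

```python
# Hypothetical counts for illustration only -- not actual EPA figures.
received = 1000   # employees who received the survey invitation
started = 305     # employees who began the survey
submitted = 181   # employees who submitted a final survey

response_rate = submitted / received * 100    # share of recipients who submitted
completion_rate = submitted / started * 100   # share of starters who submitted

print(f"Response rate: {response_rate:.1f}%")
print(f"Completion rate: {completion_rate:.1f}%")
```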
Was the survey confidential or anonymous?
This survey was both confidential and anonymous. For the initial data collection, a personalized email was sent to each federal employee with a unique survey link, inviting their participation in the survey. The Qualtrics* program disassociated the email address from each respondent, ensuring that survey responses could not be linked to an individual employee. To ensure confidentiality in the analysis, any Program Office, Regional Office, other EPA organizational unit, or demographic category with fewer than 20 respondents will not be included in any analysis.
What types of questions were asked on the survey?
Fifteen questions were asked on a 4-point Likert scale, with response options ranging from strongly agree to strongly disagree, very satisfied to very dissatisfied, excellent to poor, extremely familiar to unfamiliar, and very comfortable to not at all comfortable. There were 24 yes/no questions, two ranking-style questions that asked respondents to prioritize responses and select their top five choices, and 24 open-ended questions.
Questions were divided across 10 primary themes: participant overview, manager experiences/perceptions, familiarity with policy, culture of scientific integrity, leadership, procedures and experiences with reporting lapses, knowledge and experiences related to misconduct, review and release of scientific information and media, barriers and suggestions for improvement, and demographics.
How was this survey developed?
The survey instrument was designed by the Scientific Integrity Program to address key issues pertaining to implementation of EPA’s Scientific Integrity Policy. In developing the 2021 survey, we closely examined the 2016 Scientific Integrity Program and 2018 OIG survey instruments. We included both direct and modified versions of some of the 2016 and 2018 questions. We also added new questions to better understand and assess additional aspects of scientific integrity and how we can improve our culture. In alignment with best survey development practices, the draft survey was pre-tested among nine experts within the Scientific Integrity Committee and pilot tested among six non-EPA experts. These steps were important for assessing validity of the content as well as reducing potential measurement error. After revision, the survey product was shared for feedback, finalized, and approved for internal distribution.
Are these data accurate? What kind of testing have you done?
To ensure the validity of the data and reduce survey error, this survey included both pre-testing and pilot testing. In the survey design, skip logic techniques were embedded throughout. Skip logic is a feature that determines which question a respondent sees based on their response to a previous question. Skip logic allowed us to personalize the survey so that respondents could only answer questions that were applicable to them based on their organization and/or position (e.g., only those who identified as managers were directed to the manager questions). Further, to assist in identifying the presence of response bias, scoping questions were included that asked for the same information in different ways, so that inconsistent answers could be detected.
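Skip logic can be thought of as a branching rule applied to earlier answers. The sketch below is a simplified illustration using hypothetical question identifiers, not the actual survey's routing:

```python
def next_question(respondent: dict) -> str:
    """Return the next question ID based on a prior answer.

    Hypothetical skip-logic rule: respondents who self-identified as
    managers are routed to the manager-experience block; everyone else
    skips ahead to the next theme. Question IDs are invented for
    illustration.
    """
    if respondent.get("is_manager"):
        return "Q12_manager_experiences"
    return "Q20_culture_of_scientific_integrity"

print(next_question({"is_manager": True}))
print(next_question({"is_manager": False}))
```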
What is a frequency distribution table?
A frequency distribution table is a way to organize survey data: for each question, it lists the categories of responses, the number of observations in each category, and the total number of respondents included.
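A table like the one described can be built with a short script. The Likert responses below are hypothetical, for illustration only:

```python
from collections import Counter

# Hypothetical Likert responses to one survey item (illustration only).
responses = ["Strongly agree", "Agree", "Agree", "Disagree",
             "Strongly agree", "Agree", "Strongly disagree", "Agree"]

counts = Counter(responses)   # tally of each response category
total = len(responses)        # total number of respondents for this item

print(f"{'Response':<20}{'Count':>6}{'Percent':>9}")
for category in ["Strongly agree", "Agree", "Disagree", "Strongly disagree"]:
    n = counts.get(category, 0)
    print(f"{category:<20}{n:>6}{n / total:>8.1%}")
print(f"{'Total':<20}{total:>6}")
```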
How are the data analyzed?
Simple summaries, referred to as descriptive statistics, were calculated for all survey items using the statistical software IBM SPSS Statistics 27*. Frequency statistics describing the data are displayed as summaries for each of the Likert scale and demographic questions. Ranking-style questions were treated as individual items; their frequencies were imported into Excel to calculate weighted averages using the ranking method.
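The ranking method is not spelled out here; one common approach, sketched below as an assumption, weights each rank position inversely (rank 1 earns the most points) and averages over respondents. The rank counts are hypothetical:

```python
# Hypothetical rank frequencies for one choice in a top-five ranking
# question (illustration only). rank_counts[r] = number of respondents
# who placed this choice at rank r (1 = highest priority).
rank_counts = {1: 40, 2: 25, 3: 15, 4: 12, 5: 8}
num_ranks = 5

# Inverse-rank weighting: rank 1 -> 5 points, ..., rank 5 -> 1 point.
weighted_sum = sum((num_ranks - rank + 1) * n for rank, n in rank_counts.items())
total_responses = sum(rank_counts.values())

weighted_average = weighted_sum / total_responses
print(f"Weighted average score: {weighted_average:.2f}")
```

A higher weighted average indicates that more respondents ranked the choice near the top of their list.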
Why do the questions have a different number of respondents?
Throughout this survey, skip logic was implemented for certain questions so that relevant questions reached the appropriate parts of the Agency. For example, some questions were about experience as a manager, so only those who self-reported that they were managers could access those questions. Additionally, not every question required an answer, so some participants chose to skip certain questions. As a result, questions throughout the survey have different numbers of respondents.
What are the next steps for the analysis?
This preliminary analysis focused narrowly on basic frequency analyses across all of the survey questions; a more in-depth examination of the data is necessary to assess patterns. Additional analyses will be conducted and released on this website upon completion.
Why did this survey only cover the past two years (2019-2020)?
Since 2016, EPA has conducted a survey on scientific integrity approximately every two years. The most recent prior survey was conducted by the Office of Inspector General in 2018 and was based on participants’ experiences in the six months preceding the survey. In alignment with the previous surveys, the 2021 survey was narrowed to a two-year recall period and consisted of questions and response items aimed at gauging employees’ awareness and understanding of the Policy and their experiences regarding the culture of Scientific Integrity at the Agency.
Who can I contact with other questions?
Please contact the Scientific Integrity team with any questions.
*Disclaimer
Mention of or referral to commercial products or services, and/or links to non-EPA sites does not imply official EPA endorsement of or responsibility for the opinions, ideas, data, or products presented at those locations, or guarantee the validity of the information provided.