Assessment 101

What is assessment?
What assessment is NOT
Why is assessment important?
What are the characteristics of effective assessment?
How do I know which assessment is right for my purposes?
What are the different types of assessment?
I’ve heard about portfolios – what are they and how should I use them?
How do I choose an appropriate instrument to measure my outcome?
How do I put my survey on-line?
How do I create a survey that uses Scantron? 
What is confidentiality and why is it important?
Who should I include in the assessment?
What’s a good response rate, and how do I get one?
How can I avoid SSF (Student Survey Fatigue)?
How do I analyze my data?
What if I don’t believe my data?
I’ve got my data – I’m done, right?
Who should I tell about the results?

 

What is assessment? (Back to top of page)

  • Assessment tells us what happened:  utilization, satisfaction, efficiency, quality
  • Assessment documents an observation
  • Assessment also tells us the “so what?” – why all of this matters

What assessment is NOT (Back to top of page)

  • Program assessment does NOT evaluate the students
  • Program assessment does NOT evaluate the program coordinator or department head
  • It is NOT research.  Research proves; Assessment improves.
  • Since assessment is NOT research, you don’t need to know the start and end point – you just need to know whether the program achieves its goal of increasing the probability that students reach the desired outcomes
  • Assessment does NOT require a particular sample size, and you don’t even have to assess every learning objective – if your results are meant to generalize beyond your program, that is research
  • Assessment is NOT static – what works well with this year’s students may not work well next year

Why is assessment important? (Back to top of page)

  • Externally demanded, inspired by a weakening of the implied social contract regarding higher education
  • Institutionally demanded
  • Internally driven to do our best work
  • Explains what we do, what we accomplished, and what difference that makes in ways that people outside our programs can understand and remember

What are the characteristics of effective assessment? (Back to top of page)
From Keeling, Wall, Underhile & Dungy (2008), Assessment Reconsidered, International Center for Student Success and Institutional Accountability (ICSSIA)

  • Illustrates fidelity to institutional mission, vision & culture
  • Shows evidence of developmental pertinence (resonance/relevance)
  • Holistic
  • Linked to professional development activities for staff and faculty
  • Builds community capacity (everything is about learning – e.g. don’t even purchase a piece of exercise equipment without thinking through the learning objectives)

How do I know which assessment is right for my purposes? (Back to top of page)

  • Establish program objective(s):
    • Who, specifically, is the population included in the assessment?
    • What are the expectations (behavior) for this population?
    • Under what conditions (circumstances)?
    • To what degree (how well?  what level?)
  • What is going to tell you whether or not your program was successful?  Measure that.
  • Be sure you are measuring that indicator of success and not simply what is easiest to measure. 
  • Try to get different perspectives on that indicator.  Any one perspective will give you an image of your program; three perspectives provide better accuracy (remember that a GPS requires 3 satellites to pinpoint your location).
  • If you’re measuring some construct that other programs are looking at you may not need to create your own assessment – look at standardized instruments.

What are the different types of assessment? (Back to top of page)

  • Utilization:  How many people were served?
  • Satisfaction:  Also known as “formative assessment” – how well did your participants like the program?
  • Efficiency:  Could the end have been achieved with fewer resources (effort, money, etc)?
  • Quality:  Also known as “summative assessment” - Did your program do what you intended it to do?  This can be measured in various ways: 
    • Qualitative
      • Interviews
      • Focus groups
      • Portfolios
      • Observations
    • Quantitative
      • Surveys/Standardized instruments
      • Clickers
      • Quizzes/performance assessments
      • Data such as retention/graduation
      • Longitudinal assessments (assess not just at the end of the program, but follow-up to ensure results endured)

I’ve heard about portfolios – what are they and how should I use them? (Back to top of page)

  • Catherine Palomba defines portfolios as a type of performance assessment in which students’ work is systematically collected and carefully reviewed for evidence of learning and development.
  • When done right, a portfolio tells a longitudinal story of learning, encourages students to take responsibility for their own learning, provides insight into the student’s metacognition (thinking about thinking), and invites interaction and dialogue.
  • Be mindful that a portfolio may not be a true representation of what the student knows or can do, especially if motivation for collecting/selecting material for inclusion is low.
  • While the mechanics of portfolio assessment are beyond the scope of this tutorial, they generally include some required and some student-selected artifacts which are evaluated using a standardized rubric.  Having more than one evaluator generally provides a more objective review.
  • For more on portfolios, see  Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education, Catherine A. Palomba, Trudy W. Banta (1999) Jossey-Bass Publishers, San Francisco.

How do I choose an appropriate instrument to measure my outcome? (Back to top of page)

  • Google Scholar:  http://scholar.google.com/
  • International Center for Student Success and Institutional Accountability:  http://www.icssia.org
  • APA Guide to Finding Information about Psychological Tests:  http://www.apa.org/science/faq-findtests.html
  • ASSESS Listserv (operated through the University of Kentucky; search through Google)
  • Assessment of the First Year Experience Listserv
  • Other colleges’ and universities’ websites (e.g. www.jmu.edu/assessment)
  • Companies specializing in assessment in higher education such as Educational Benchmarking, Inc:  http://www.webebi.com/.
  • Make use of the university’s administration of NSSE (National Survey of Student Engagement:  http://www.nsse.iub.edu/index.cfm), employer survey, and alumni survey (these rotate on an annual basis so that each are done every three years)
  • Look for instruments published with their articles in ERIC, PsycINFO, PsycLIT, Health and Psychosocial Instruments (HAPI)
  • Tests in Microfiche
  • Mental Measurements Yearbook:  http://www.unl.edu/buros
  • Tests:  A comprehensive reference for assessment in psychology, education & business (1986).
  • Evans, Forney & Guido-DiBrito:  Student Development in College: Theory, Research, and Practice (see the section on assessment)
  • Measuring self-concept across the life span:  Issues and instrumentation

NOTE:  If you use someone else’s instrument you need to ask their permission.  If the test is copyrighted you need their permission AND there will likely be a fee.  It is unethical to borrow items from someone else’s test without permission.

How do I put my survey on-line? (Back to top of page)

  • First ask yourself if this is the best way to administer your survey.  On-line surveys may be efficient and convenient, but if you have the opportunity to survey a captive audience immediately following a program you will get a better response rate and perhaps more meaningful data. 
  • There are several free on-line tools that will allow you to publish and distribute surveys electronically (Google “free surveys”).  Perhaps the best known is SurveyMonkey.com.
  • The Center for Research in Educational Policy (CREP) on campus also has an on-line survey tool and can help you compile data for a fee.
  • The Mid-South Survey Research Center, also on campus, can assist you with on-line or telephone surveys (including multi-lingual surveys) and data analysis for a fee.

How do I create a survey that uses Scantron? (Back to top of page)

  • Contact the Helpdesk (X8888).  IF (and only if) they tell you that the university doesn’t do survey scanning, contact Scott Beck at 3864 or sbeck@memphis.edu.
  • CREP and the Mid-South Survey Research Center can help with this as well for a fee.

What is confidentiality and why is it important? (Back to top of page)

  • Confidentiality is the assurance that a respondent’s identity and identifying responses will not be disclosed
  • Confidential responses allow respondents to be more honest; there is less pressure to respond in a socially desirable manner
  • Assessment should clearly state whether responses will be kept confidential or not
  • Sensitive data that includes names needs to be kept in a secure place
  • Data should be reported in aggregate or in a way that does not identify individuals unless you have their permission

Who should I include in the assessment? (Back to top of page)

  • Everyone in my program? 
    • Great if you have a captive audience
    • Gives you a broader spectrum of data
  • Random sample of participants?
    • Must generalize from a smaller group to the whole
    • Must also ensure you have a representative sample.  This is one of the biggest problems with assessment – you tend to hear from two groups of people:  those who really like the program and those who have an axe to grind.  A sample should include students representative of any characteristic that might make a difference in the way they respond.  For instance, if males and females are likely to have different perceptions of your program, make sure you include both males and females in your sample (a minimal sampling sketch follows this list).
    • May be more efficient for large groups as it makes data more manageable, utilizes fewer resources (e.g. incentives)
  • Broader audience –campus-wide (perhaps coordinating with other departments’ assessments)?
    • Helpful if you need a control group – can compare participants with non-participants
    • Can collaborate with other programs seeking assessment data for greater efficiency
  • Other constituents?  (Grad students, RSO Advisors, Faculty, Parents, Alumni, etc.)  NOTE:  Every year post-graduation, alumni’s image of their alma mater improves (from a NASPA March presentation, ICSSIA Assessment Instrument Review session)
    • Any one perspective will give you an image of your program; three perspectives provide better accuracy (remember that a GPS requires 3 satellites to pinpoint your location)
    • Also generates buy-in/ownership for your program
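
If you do decide to sample, the sketch below (an illustration added here, not part of the original guidance) shows one way to draw a simple random sample and a roughly proportional stratified sample from a participant list in Python.  The file name “participants.csv” and the “gender” column are hypothetical placeholders – substitute whatever characteristic might make a difference in the way your students respond.

    # Hypothetical sketch: simple random and stratified samples from a participant list.
    # "participants.csv" and its "gender" column are placeholder names.
    import csv
    import random

    random.seed(42)  # fixed seed so the draw can be reproduced

    with open("participants.csv", newline="") as f:
        participants = list(csv.DictReader(f))

    # Simple random sample of up to 100 participants
    sample = random.sample(participants, k=min(100, len(participants)))

    # Stratified sample: draw from each group in proportion to its size so the
    # sample mirrors the population on a characteristic that may affect responses
    by_group = {}
    for person in participants:
        by_group.setdefault(person["gender"], []).append(person)

    stratified = []
    for group, members in by_group.items():
        k = max(1, round(100 * len(members) / len(participants)))
        stratified.extend(random.sample(members, k=min(k, len(members))))

    print(len(sample), len(stratified))

Either list can then serve as the invitation list for your survey.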

What’s a good response rate, and how do I get one? (Back to top of page)

  • Gather data from a large enough sample that you are able to generalize your results to the entire group.  Individuals who respond to surveys may be significantly different from those who do not; therefore surveys with lower response rates may be less representative than those with higher response rates, although newer research questions this assumption.  While it is debatable whether you can put a number on the lowest acceptable response rate, any assessment including data from fewer than 20% of its selected respondents (whether that includes all the participants or a selected sample) should be interpreted cautiously (a quick way to check your rate is sketched after this list).  The most important thing to consider is whether the responses you got are representative of the entire group (or whether they come only from the extremes that either loved or hated the program and wanted to tell you about it).
  • Strategies for increasing response rates include quick on-the-spot surveys, incorporating the assessment into the program (e.g. clickers), and incentives.
  • Be careful not to exchange a good response rate for a loss of confidentiality.
  • Incentives should never be so large as to be unreasonably coercive.   
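
The 20% figure above is a rough caution threshold, not a statistical rule, but it is easy to check.  The arithmetic below is a minimal illustration with made-up numbers; substitute your own counts of invitations and completed responses.

    # Illustrative response-rate check (the numbers are made up)
    invited = 250      # everyone you asked to complete the survey
    responded = 62     # completed responses you received

    rate = responded / invited
    print(f"Response rate: {rate:.0%}")   # 62 / 250 -> 25%

    # 20% is the rough caution threshold mentioned above, not a hard rule
    if rate < 0.20:
        print("Interpret cautiously: under 20% of invitees responded.")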

How can I avoid SSF (Student Survey Fatigue)? (Back to top of page)

  • Surveys are great, convenient ways of getting assessment data, but there are so many requests for us to provide our feedback that we’ve all become fairly apathetic.  So be creative – think of ways OTHER than using Survey Monkey to get student feedback:
    • Use a paper copy while you’ve got a “captive audience” (a quick quarter-page survey at the end of a one-time program works well).
    • Doing a PowerPoint presentation?  Try using the Student Affairs-owned set of (30) clickers to get anonymous feedback on the spot!
    • Think about collaborating with other departments and administering one survey.  You can combine resources to offer incentives for returning one survey rather than several, and you can also get comparative data (how do students who did or did not attend Frosh Camp, or do or do not live on campus, respond differently to questions?).
    • Have a more extensive program?  Consider a focus group or interviews to get more in-depth feedback.  Face-to-face feedback can increase the likelihood of socially desirable answers, though, so you need to craft your questions, and your group, carefully.

How do I analyze my data? (Back to top of page)

  • This partly depends on how you collected your data.  If you used a tool like Survey Monkey, it will do quite a bit of analysis for you.  If you hand-entered or scanned in data, you can put it in an Excel file for simple calculations.  You can also import an Excel table into a statistical program such as SPSS; a minimal example of summarizing an exported data file is sketched below.  If you’re not familiar with these programs, please contact Dan Bureau (dabureau@memphis.edu) for assistance.
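
If you prefer to work outside Excel or SPSS, the sketch below shows one way to summarize an exported response file in Python with the pandas library.  The file name “survey_export.csv” and the column names (“satisfaction”, “attended_program”) are hypothetical – replace them with the actual headers from your export.

    # Minimal summary of a hypothetical survey export using pandas
    import pandas as pd

    responses = pd.read_csv("survey_export.csv")

    # Frequency counts for a Likert-style item
    print(responses["satisfaction"].value_counts().sort_index())

    # Overall mean, and the mean broken out by a grouping variable
    print(responses["satisfaction"].mean())
    print(responses.groupby("attended_program")["satisfaction"].mean())

Reporting the grouped means (rather than individual rows) also keeps the results in aggregate, consistent with the confidentiality guidance above.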

What if I don’t believe my data? (Back to top of page)

  • Were there problems in the methodology?  Perhaps you had vague terms or ambiguously constructed questions that led to different interpretations of the same question.  Here are some other sources of bias in assessment:
  • Hawthorne Effect:  Participants change their behavior as a result of being assessed, not as a result of your program.  In other words, they might commit to leadership or community service or exercise, etc, because they know they are being “watched”.
  • Social desirability:  Students respond in the way they think you want them to respond.
  • The order in which you ask questions can affect responses.  Be careful not to persuade participants to respond in a certain manner by asking loaded questions or a series of questions that leads to a likely response.
  • Low response rate:  Students who respond to surveys tend to do so because they have something to say – they either loved the program or have a bone to pick.  Getting only these extremes will not allow you to extrapolate your findings to your population.  You need to work at making sure you have a representative sample; however, newer research has found that increasing the response rate alone often does not solve this problem – whether you have a 20% or an 80% response rate, the remaining nonrespondents are likely to be the very students whose answers would change your results.
  • People tend to agree with survey items (acquiescence bias), so if all your questions are positively phrased (“Did you like…?”) you’re more likely to get positive results, and vice versa.

I’ve got my data – I’m done, right? (Back to top of page)

  • Data is only valuable if it is utilized.
  • How will you change your program based on your assessment?
  • How will you use your assessment from last year in your planning for next year?

Who should I tell about the results? (Back to top of page)

  • It’s always important to include assessment results in your annual report.  Beyond that, consider sharing results with stakeholders in your program, including students!