Brainstorming Session on Assessment Models
for the Discovery-enriched Curriculum

The Questions

Question 1: How can we assess student learning in a Discovery-enriched Curriculum (DEC) with a focus on student achievement of learning outcomes?

Question 2: How can we develop a systematic approach to the summary assessment of teaching and learning that satisfies the needs of our multiple stakeholders?

Background Information (extracted and adapted from comments by Prof. Brian Coppola)
Three Schools of Thought on Assessment:

Assessment as Education Research. Grounded in a theoretical framework, assessment is used to inform instructional goals. This results in new instructional design, a multi-method approach to data collection, reliable and valid assessment, and thus meaningful knowledge.

Outcome-driven Quantitative Assessment. This is a pragmatic approach to measuring learning accomplishments that is aligned with key performance indicators provided by external stakeholders. These assessments have obvious external value, but may not provide the feedback needed by those in the classroom.

Program Assessment as Feedback and Ratings. This may be an atheoretical assessment of learning using multiple-point scales, with single-item measures to assess complex constructs (e.g., a single question such as "Did you learn a great deal?"). These measures have pragmatic purpose and diagnostic value, but are also subject to political interpretation.



Brainstorming Feedback

Question 1. How can we assess student learning in a Discovery-enriched Curriculum (DEC) with a focus on student achievement of learning outcomes?


  1. Final Year Projects (FYP)
  2. Identifying Knowledge gaps
  3. Asking original questions
  4. Real problems in the course
  5. Research components in courses


  1. Patents
  2. Copyrights
  3. Papers
  4. Proposals (Reports)
  5. Exhibitions
  6. Creative works
  7. New software applications
  8. Games
  9. Business plans
  10. Feedback from employers


1. Binary or not, i.e., is it or isn't it a discovery?

(a) Can be large or small (don't get caught up in the Nobel chase)
(b) Most discoveries are not broadly known

2. Assessment = counting

(a) CityU, by such and such criteria, had X% of this
(b) Here are some examples
(c) Here is how we did it

3. Inside formal classes

Syllabus – here is the knowledge gap we aim to fill
Finding balance of faculty and student responsibilities

Projects (FYP)
Concrete outcomes like papers, creative works

Discovery process

Directed study
Final year projects
How to assess "discovery" in a big class?
To assess discovery, assess the process not just the artifact
How to quantify?
Weighting? Worth of self-assessment?
Summer internships
Attitude toward "discovery"
Should "discovery" be tangible?
Objectivity about the "discovery" process?

Non-discovery: we don't want this

Not asking questions
Conforming rather than seeking something new

What is discovery?

Discovery = research/scholarship?
Discover: knowledge, skills, talents, goals
Can there be discovery within discovery?
Would assessing discovery change from discipline to discipline?
Process template = good method
What is the relationship between DEC and OBTL?
How is DEC different from OBTL?
Discovery = unknown to known
New to the world vs. new to the person: we mean the former
Is discovery same as research?
Experimental discovery
What to discover in the study of English? Different definitions?
New methodology? Discover good process – new interpretation?
How is assessing knowledge different from assessing discovery and innovation?
Failure is success? Edison’s many failed attempts at finding a suitable light bulb filament
New data
Measuring creativity – new (original), practical (can be used for ...), having value
Impact of discovery can take a long time to appreciate
Unintended, unexpected consequences


Should discovery be assessed in different stages?
What is new and novel about a set of experiments?
American Idol-type presentations
Can examinations measure discovery? How?
1-minute assessment evaluations
Formative vs. summative assessments
Types of feedback
How to assess differences between group projects and individual projects?

Goals and skills/Competencies

Open-mindedness, being able to assess feasibility
Gaining something that is valued
Promotion of Individual skills
Learning to appreciate what you know
Determining the basic knowledge required to discover something
Competency in communication of data
Associating experiences with a pivotal event

Source of discovery

"Skunkworks", where students can make original discoveries
Problem-finding process
Critical literacies
Interdisciplinarity and use of multiple sources
Make discoveries from unusual sources


Student self-assessments and feedback
Students keep an online log – process and thought
Different expert input required for assessment on discovery?
What about assessing the community impact of students’ projects?
Collective intelligence
Products – assess students' creations/projects and capture teachers' viewpoints
Other feedback sources – project stakeholders and their reports
Tools: student viewpoint – portfolios with self-reflective features
For group projects, how to differentiate individual contributions?
Public assessment
Mentoring system
Peer assessment
Should discovery be assessed by people from industry, government, NGOs?


Question 2. How can we develop a systematic approach to the summary assessment of teaching and learning that satisfies the needs of our multiple stakeholders?

Peer Review

Expert observer such as retired overseas Professor
External assessor
Not every course every year, e.g., every 3 years
Mentoring, constructive feedback
Sharing session
Teaching Excellence Award for each College/School?

Teaching Excellence Award(TEA)

Winners to share their experience through:
- 20-minute presentations
- Poster presentations


Who is a good instructor? Qualities?
Good teachers can be found in courses of any size


Find those successful teachers: what do they score highest on? Five questions maximum
Peer review – peer from same discipline or related one to minimize potential conflicts of interest
Who will use the measures?
UGC: use of new technology in learning
UGC: how learning culture is encouraged at CityU
UGC: how teaching and research are linked together
An online system would make life easier from an administrative perspective
UGC: internationalization in students’ life
Mechanism – shouldn't take more than 1% of staff time, or ~20 hours per year?
Easier mechanism is to focus on outliers, i.e., the especially strong and weak
Student assessment can be a problem because of time conflicts with other courses
A scoring/rating system will make it easier to make administrative decisions
Experts could put evaluation data from departments into a common format
What are mechanisms for improvement?
Feedback on student’s assignments
How much time does the system take? How much should it take, from an administrative perspective?

Shared opinion/informed assessment

Let students see one another’s comments and LIKE/DISLIKE them (as on Facebook)
Pedagogical content knowledge – effective connection with subject + learning goals
UGC: benchmark with other universities
Collect evaluation data starting in week 1?
Link TFQ answers to attendance records?
Allow weighting of evaluation questions?
Sensitivity and knowledge about student learning
Can we provide additional information from the staff?
Is there alignment across goals/methods/outcomes?
Are staff's expectations clearly transmitted?


Are teaching evaluations a popularity contest?
"Undercover" assessment appropriate?
Peer review? How to do it?
“Mystery shoppers” observations (fair or not?)
Should student reviews (TFQs) be compulsory?
Incentives – not just money for good teaching (staff)
Not just high stakes and one winner; include multiple kinds of rewards to recognize many staff
Evaluation in connection to coaching in inter-disciplinary groups
How will assessments be used in evaluating staff?
What weighting to give to TFQs?
Peer review – peer from other discipline
Use of alumni observers of classes and EAAs

Multiple methods of assessment

1-minute student feedback
TFQ results
From alumni
From employers
UGC: accountability to tax payer
Peer evaluation by a colleague at the same grade, but from a different department
Ask general-grade (i.e., office) staff to handle the evaluation logistics
Direct classroom observations
Self-assessment (teacher fills out his/her own questionnaire)
Multiple dimensions to instruction: performance = effective communication, organization
Ways of evaluating - questionnaire, self-reflective portfolio, observations by others, teacher/student advising/mentoring relationships


Delayed assessment
Measure students’ improvement from before until after the course
Post-graduation feedback after 1 year, 5 years, 10 years
How can excellent teaching be rewarded compared to, say, research?
Do students care about learning?


Talking to teachers before taking a course?
Help in choosing a course? Perhaps a committee should be in place
Permission to sit in instructor’s class before registration
Course manuals should be provided stating expectations of students
Early warnings of disasters + SWAT team to recover
Apprenticeship involving one teacher to one student
Support to young faculty on teaching/research, etc. (UGC)
Staff performance is stable, so look for changes
What to do with “bad” teachers?
Clear goals and assessment criteria, with weightings
Reward for long-term commitment to self-development
Have staff meet employers to learn what is useful for students
Town hall-style meetings to ensure direct access to grassroots ideas