Saturday, October 27, 2012

Who are Mods 5 & 6 for?

Q:  Districts are interested in contracting with us to train teachers in Educator Evaluation System modules 5 (Gathering Evidence) and 6 (Observations and Feedback).  Five seems to be a great fit for teachers, but six seems designed for evaluators.  Will teachers benefit from attending training in module 6?

A:  Module 6 is geared more toward evaluators, since they are responsible for conducting observations, but its content will benefit all educators because it represents a significant shift from current practice.  Teachers tend to equate observations with full-length classroom observations bracketed by pre- and post-conferences, during which extensive notes might be taken against a detailed rubric.  The shift to frequent, unannounced observations will feel just as new to a teacher as it will to an evaluator, and it will benefit districts to make sure that everyone is on the same page and holds the same expectations about the role of observation in the new framework.  DESE encourages districts to send both evaluators and teachers to Module 6, but it is ultimately a district-level choice.

IPDPs and Evaluation System

Q:  How does the Individual Professional Development Plan (IPDP) process relate to the new educator evaluation system?

A:  The revised licensure regulations allow educators to use activities in their educator plans to contribute to their IPDPs, and vice versa. Educators and evaluators are encouraged to align the two when possible. That said, an evaluation in no way affects an educator's ability to renew his or her license.

Limits to Teacher Ratings?

Q:  Is there a percentage-based limit on the number of teachers who can be rated Exemplary (or Needs Improvement, for that matter)?  Any truth to this?

A:  No—there is no percentage-based limit associated with any performance category.  The regulations place no numerical targets or requirements on the number of educators in each rating category.

Monday, September 17, 2012

New Guidelines Aug 2012

Q: Is DESE updating and changing things as the new system is implemented across the state?  

A: YES.  On Tuesday, August 28th, Commissioner Chester issued new guidelines for Ed Eval trainings.  Here they are, with a summary of the implications for CES's work as an approved vendor.

The new guidelines were sent to you in an email - do take a look at them.
Here are the implications for CES:
  • Additional RTTT districts may look to CES for training because they are:
    • Required to begin training before beginning to evaluate.
    • Required to publish a training schedule by 10/1/12.
  • More districts may seek training for their evaluators, as DESE has now defined the minimum required training as:
    • 4 hours for teachers, to be delivered by district leaders*
    • 11 hours for evaluators, to be delivered by vendors or district leaders**
  • We will not deliver Modules 7 and 8 this year because:
    • Module 7 will be online only
    • Module 8 will be released in Summer 2013
  • It appears that we will not need to tackle the new one-hour workshops as they are to be delivered by district leaders.
* This consists of four new, one-hour modules, to be released between 9/4 and 10/1.
** This consists of existing modules 1-6.

Monday, September 10, 2012

What's it look like?

Q:  What does proficient performance "look like"?  What exactly would you expect a teacher to be doing?  To what extent will DESE define what things might "look like"?

A:  DESE will provide no additional guidance around using the rubrics to define proficiency. That work will have to be done between evaluator and educator.

More Than One Rubric

Q:  Special educators in Hadley asked whether they could get the Specialized Instructional Support Personnel (SISP) rubric during training.

A:  Yes, let's get this to them (or get any subgroup its particular rubric) so that they can work with it during training.  The rubrics are posted on DESE's website.

How much evidence? II

Q:  How much evidence is needed for a single element?

A:  Module 5 has information about this.  In general, evidence should be provided for each indicator (not for each element), and a single product, such as a model curriculum unit, may be sufficient evidence to demonstrate proficiency across multiple indicators.

Example of Goal

Q:  What is an example of a "student learning goal"? 

A:  Check out the Module 4 materials; student learning goals are addressed in that module.

Tuesday, August 28, 2012

Multiple Measures in PPT

Q: In the Module 1 PowerPoint, slide 30, why do "multiple measures of student learning" appear in the top left, as evidence to be used when rating performance, instead of only appearing at the bottom of the slide, as evidence of impact?

A: We are going to be sending an email to a variety of audiences in the next day or two that will clarify the role of “multiple measures of student learning” in the evaluation process. Simply put, since all educators are required to set a student learning goal (in addition to a professional practice goal) as part of their Summative Rating, they will automatically be using various measures of student learning to gauge progress toward that student learning goal. Hence the placement of “multiple measures” in the top left of Slide 30.  The Impact Rating is another place where multiple measures will inform the evaluation, and regulations specify that these must include district-determined measures and MCAS where applicable.

-Claire Abbott, DESE

Narrowed Focus

Q:  One district asks: We will focus on ten strategically selected indicators in this first year.  Even though we have narrowed our focus in this way, would teachers still be expected to gather and provide evidence of their performance in all 16 indicators?

A:  Selecting a group of Indicators to focus on is an excellent way to approach the evaluation within a district. That said, as you imply, educators are still expected to gather evidence of their performance in all 16 Indicators. However, the preponderance of evidence can and should rest with those 10 "high priority" Indicators. Module 5, which was just released, talks a bit about how individual artifacts and/or observations often provide evidence across multiple Indicators, which should reduce the perceived "burden" of having to produce individual artifacts for each and every Indicator.

-Claire Abbott, DESE 

Wednesday, August 22, 2012

MTA Prez Supports Ed Eval System

You might consider using Paul Toner's words in your next Educator Evaluation training:

“This framework incorporates many of MTA’s recommendations and, if properly implemented, will lead to better evaluations and improved teaching, learning and leadership in our schools,” said MTA President Paul Toner.                          From MTA's website

Friday, August 17, 2012

How Many? 1st Year


Q: Is it the case that districts must provide a summative performance rating for only 50% of their educators by the end of 2012-13?

A: Yes. During the first year of implementation, districts are only required to evaluate 50% of their educators. This “phase-in” approach is in the regulations here: “A district may phase in implementation of its new evaluation system over a two-year period, with at least half of its educators being evaluated under the new system in the first year.” Some districts are choosing to take advantage of this option. Those that are taking this approach are doing so in a variety of ways: a lottery, starting with teachers only in Year 1, starting with elementary only in Year 1, etc. As long as they evaluate 50% of their educators during Year 1, they meet the minimum requirement.

-Claire Abbott, DESE 

Mid-cycle Conference and Performance Ratings


Q: A local superintendent asks, "If the mid-cycle conference between evaluator and educator is to be truly formative, no performance ratings should be conferred. Are evaluators required to arrive at ratings at the mid-cycle meeting?"

A: Your local superintendent asks an important question. Yes—the mid-cycle conference is “formative” and intended to be a point where evaluators and educators touch base, check progress on goals, and make any mid-course adjustments to a plan if necessary. That said, there’s an important technical distinction between the formative assessment and the formative evaluation that relates to ratings:

-A formative assessment occurs mid-way through the cycle for educators on plans that are one year or less in length. No ratings are required for a formative assessment.

-A formative evaluation occurs mid-way through the cycle for educators on 2-year plans, so presumably this would take place in May or June. Ratings are required for a formative evaluation, but they default to the educator's prior Summative Rating unless there is evidence suggesting a significant change in the educator's practice (in which case an evaluator could issue new performance ratings and change the educator's plan). The default rating is designed to alleviate the burden on evaluators of having to derive a rating for every educator on a yearly basis. (The reason behind the rating requirement for formative evaluations is that in order to meet the federal RTTT parameters, states had to commit to yearly educator evaluations.) Let me know if you have any more questions about this distinction between the formative assessment and the formative evaluation.

-Claire Abbott, DESE

Numbers

Q: How do I know how many people I will be working with?

A: As assignments are given, Damon will provide group size information as he has it.  Don't be afraid to ask him or your school / district contact if you don't know.

Partner Communication

Q: How do I know who my facilitation partner is and get in touch with him / her / them?

A: Damon is coordinating the pairing and team building.  Once folks get assignments, he will provide contact information, get out of the way, and be ready to provide support as needed.

Materials

Q: What is the process for getting all the needed materials for the facilitation?

A: Handouts and other pertinent hard-copy materials will be generated by CES and provided to participants in a binder.  Chart paper, markers, post-its, tape, etc. will also be provided by CES.  Facilitators will be responsible for creating any supplemental materials they decide to include OR that were recommended during the 'train the trainer' sessions (e.g., agenda and intended outcomes on chart paper).  Lastly, facilitators will work with CES staff to pick up the needed materials or otherwise make sure they get to the site.

Audience?

Q: The materials provided by DESE (facilitator guide, PowerPoint, handouts) seem to be designed for school leaders. Is that the audience?

A: The modules provided by DESE are indeed targeted primarily at school / district leaders, not the educators themselves.  The idea is that the leaders would in turn present the model to educators using the materials.  While CES is using the framework with educators (in large groups) at a couple of sites early on, going forward the work will have school / district leaders as the target audience.

Room Set Up

Q: How do facilitators communicate room set up to school or district contacts who are contracting with CES for this work?

A: Once you get an "assignment" to facilitate the modules, CES staff (OK, Damon) will provide contact information to you so one person from a facilitation team can communicate directly.

Breaks?

Q: Where do we fit in breaks as facilitators?

A: Breaks need to happen for optimal learning.  They are not built into the DESE-created materials, so it will be up to the discretion of CES facilitators to include breaks in the work.


Thursday, August 16, 2012

Contact at DESE

Here's Samantha Warburton's email address: SWarburton@doe.mass.edu
Apparently she's on vacation till 8/22; here's the email address for her colleague: "Abbott, Claire" <CAbbott@doe.mass.edu>.

Tuesday, August 14, 2012

How much evidence?

Q:  Will teachers need to collect evidence about their performance for each of the 33 elements included in the teacher evaluation rubric?

A: There is no requirement for the number of Elements on which educators OR evaluators must gather evidence. We suggest that educators and evaluators share the responsibility of gathering evidence and ensure that there is some evidence for each Indicator, although the preponderance of evidence is likely to be in the areas of most focus. Finally, evaluators will need to have gathered, or have access to, sufficient evidence to meaningfully inform their professional judgment in determining a rating for each Standard.

-Samantha Warburton, DESE