When Pennsylvania teachers return to the classroom this fall, they will be subject to an overhauled and increasingly meticulous evaluation method.

Dubbed the "educator effectiveness system" by the state Department of Education, the regulations took effect July 1 and will govern evaluations completed at the end of the 2013-14 school year.

The new system will place teachers into one of four rating categories: distinguished, proficient, needs improvement and failing.

Previously, teachers were rated only satisfactory or unsatisfactory, and nearly 100 percent received satisfactory ratings despite lagging test scores, state officials have said.

"There wasn't a lot of description," said Carolyn Dumaresq, deputy secretary with the Department of Education. "You had general topics, and if you got an 'unsatisfactory' on that, it may not have described to you why, or the actual behavior that you needed to exhibit."

Under the old evaluations, teachers were rated solely on classroom observations; test scores were not used.

The new regulations use three additional components of differing weight that together include 22 sets of data.

Northern York County Superintendent Eric Eshbach said the changes "are a step in the right direction."

"The rubric that has been established provides an extremely comprehensive process that is fair to the teacher and effective in improving instruction as well as recognizing the outstanding teacher," Eshbach said.


Pennsylvania State Education Association spokesman Wythe Keever said the organization supported the language in the final version of the education bill creating the new assessment system.

"Now we're just trying to help our members understand it and comply with it," he said.

The new program also aims to provide tools to help lower-rated teachers improve.

"There was really no support system, there was no training, there was no staff development," Dumaresq said. "For each of the components, we are building free of charge for the school district, or for that professional, a professional development course that will help the teacher improve and give them the skills or the materials or the research."

A new formula: While classroom observation remains part of the evaluation and accounts for half of the new formula, the other half is based on student achievement: 15 percent for a "school performance profile," 15 percent for "individual teacher measure" and 20 percent for "elective data."

The school performance profile includes data such as PSSA test and SAT scores as well as graduation and attendance rates. Each school's performance profile will be released by the department in September.

"We've talked with over 4,000 people to say, if you wanted to be measured for how effective your building was in educational achievement and student achievement, what would you want to be measured on?" Dumaresq said. "And we gathered information and all these components."

The individual teacher measurements consist of data the department will begin compiling this year, but they will not be included in the evaluation until the end of the 2015-16 school year, by which point there will be three years' worth of data to compare.

Those data track student growth from year to year, with an effort not to penalize teachers whose students were already lagging behind their classmates.

"Each year, we'll be giving to the teacher and the principal whether in fact the students grew a year's worth of achievement," Dumaresq said. "We know some children come to us ready to learn; some don't, and it takes us longer to move those students up to grade-level, and it's also important to make sure that we're at least moving them, at least we're growing them."

Eshbach said "there is still going to be uneasiness" about using students' grades to evaluate teachers.

"This uneasiness comes with the fact that we have never done it this way before," Eshbach said.

For the remaining 20 percent of the formula, elective data, schools have more discretion over which objectives to develop.

"It could be a particular group of students that you want to do better with, in growing the lower-achieving students to be more achieving students," Dumaresq said. "There could be a new curriculum that's being implemented, and (schools would ask) how well are you adjusting your teaching for delivering that new curriculum?"

The trial run: The new evaluation system has itself been evaluated thanks to a pilot program that began with four school districts in 2010-11 and expanded to around 200 districts in 2011-12 and around 300 in 2012-13.

Eshbach said principals in his school district helped pilot the program and had positive comments, as did teachers.

"The process prompted new and different conversations, even with our most veteran teachers," Eshbach said. "I received feedback from teachers who felt it provided them with ample opportunity for praise, but included detailed discussions with their principal about the intricacies of their work that could be improved."

One question that remained unanswered during the pilot program was how to evaluate teachers in subjects or grades that aren't measured by PSSA tests -- art teachers, physical education teachers, kindergarten teachers, etc.

For those teachers, the evaluation formula will still include 50 percent classroom observation and 15 percent school performance, while the weight of the elective data will increase to 35 percent to make up for the lack of "individual teacher measurement" data.
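The two weightings described above amount to simple weighted averages. The sketch below illustrates the arithmetic only; the function name, the 0-100 score scale and the way missing data are handled are assumptions for illustration, not the department's actual computation:

```python
def composite_score(observation, school_profile, elective, individual=None):
    """Combine evaluation components (each assumed scored 0-100).

    Standard formula: 50% classroom observation, 15% school performance
    profile, 15% individual teacher measure, 20% elective data.
    For teachers without PSSA-linked data (art, physical education,
    kindergarten), the individual measure is dropped and elective data
    expands to 35%.
    """
    if individual is None:
        # Non-PSSA teachers: 50/15/35 split
        return 0.50 * observation + 0.15 * school_profile + 0.35 * elective
    # Standard split: 50/15/15/20
    return (0.50 * observation + 0.15 * school_profile
            + 0.15 * individual + 0.20 * elective)

# Hypothetical teacher: 80 observation, 70 school profile,
# 90 elective data, 75 individual measure
print(round(composite_score(80, 70, 90, individual=75), 2))
```

Either way, the weights sum to 100 percent, so a teacher's composite stays on the same scale as the individual components.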

Although the new evaluations promise big changes for teachers and principals, Eshbach said students won't notice much of a difference, if any.

"Students may notice principals in their classrooms conducting longer observations," Eshbach said, "but most of our students are used to the principal being in the classroom, doing formal and informal walk-throughs."