How exciting – the first half of the 2012 national standards results have been released. Just as we expected, Hekia Parata tried to use these to prove the success of National in raising achievement. Empty words, based on a pile of effluent.
The same will apply to the release of school-based results in coming weeks, especially when the basis for comparison is the 2011 results. These are the same results that John Key described, in a moment of rare honesty, as ‘ropey data.’ Indeed.
As little changed in the way the data was collected from 2011 to 2012, there is no way to determine whether the 2012 data is less ropey, as ropey, or more ropey than 2011’s. So we have ropey data from 2011 being used as a benchmark against which to compare the ropey data from 2012. How many different definitions of ropey do we need?
Let’s look at it another way and assume that the 2011 and 2012 data sets are reasonably reliable. The so-called improvement hovered around the 1% mark, which we could expect to fall within the margin of error, had the data been analysed in a statistically valid way. No such analysis was done, so the margin of error is unknown, a point that Kim Hill extracted from the Minister on Morning Report. A totally valueless exercise.
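To see why a movement of about one percentage point means nothing without an error analysis, here is a minimal sketch of the standard margin-of-error calculation for a proportion. The figures plugged in (76% ‘at or above’, an effective sample of 2,000) are purely hypothetical, since no such analysis was ever published:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated from n observations."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical figures: 76% 'at or above', effective sample of 2,000 children.
p, n = 0.76, 2000
moe = margin_of_error(p, n)
print(f"95% margin of error: +/- {moe * 100:.1f} percentage points")
```

Under these invented assumptions the margin of error comes out at roughly +/- 1.9 percentage points, so a reported 1% ‘improvement’ would be statistically indistinguishable from no change at all. The real figure could be larger or smaller; the point is that nobody calculated it.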
But wait, there’s more. For children aged between 6 and 8 years, results are based on the end of the first, second and third years of schooling, falling on the anniversary of starting school (generally their 6th, 7th and 8th birthdays).
As the actual assessment dates are spread throughout the year, there is little common ground for aggregating the data for comparative purposes.
From Year 4 onwards, judgements are based on children’s year levels at the end of the school year. The age gap between the oldest and youngest children in a year level can be as much as 11 months, and again this creates huge difficulties when the data is aggregated.
Ranking by year level means, for example, that the year 5 cohort in 2012 is different from the year 5 cohort in 2011 (i.e., a completely different set of children), so it is not valid to compare year-level results. To invalidate things further, no allowance is made for roll turnover in schools – one school reports that 32% of its 2012 roll were not at the school in 2011. We sure aren’t comparing like with like.
Even comparing year 5 results in 2011 with year 6 results in 2012 is fraught with issues – for a start, different teachers are likely to have been involved, and the standards themselves will have changed.
We can’t even be sure that teachers’ judgements were comparable – were the 2012 judgements less critical, as critical, or more critical than in 2011? Without intensive school, district and national moderation, it is impossible to know the answer.
We have no evidence that Campbell’s Law,
“The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor,”
hasn’t influenced results as schools became aware of the potentially high stakes of the results, such as school league tables.
It is transparently clear to anyone not blinded by ideology or ignorance that it is not valid to draw conclusions from national standards data. This will never change, not even with the introduction of the PaCT database that I have previously discussed.
The recent, very welcome decision by principals’ professional groups (not unions) and the NZEI (the teachers’ union) to boycott the use of PaCT in schools will save teachers from this onerous and questionable task, especially considering recent evidence confirming that PaCT will also link teachers to children’s results. Why?
What are national standards anyway? After all, the notion of ensuring all children achieve in reading, writing and arithmetic sounds good. What’s wrong with this?
On the surface, nothing. Of course all children need these skills; that goes without saying, and no one has ever said otherwise. But that’s not the whole story, not by a long shot.
Let’s do some unpicking.
The government has an arbitrary target of having all students pass NCEA Level 2, claiming that this will be the solution to inequality in New Zealand through enhancing this country’s international competitiveness. Naturally this spin has been swallowed by all and sundry, even though it is patently rubbish, and it links back to the neoliberal ideology of Milton Friedman and the Chicago School of Economics.
Where is the evidence to support this claim that NCEA Level 2 is the panacea? Have you ever seen it? Further, where is the evidence that shows that the particular standards deemed to be worthy of NCEA Level 2 are the prerequisite ones to achieve the government’s goal?
This isn’t solely a New Zealand issue – the same question is being raised in the USA about the ‘Common Core Standards’ that have much in common with national standards.
This US article, ‘Are College and Career Skills Really the Same?’, examines the rhetoric that Common Core standards are necessary to prepare children for employment and tertiary studies.
‘The second concern is justifying the Common Core on the highly dubious notion that college and career skills are the same. On its face, the idea is absurd. After all, do chefs, policemen, welders, hotel managers, professional baseball players and health technicians all require college skills for their careers? Do college students all require learning occupational skills in a wide array of careers? In making the “same skills” claim, proponents are really saying that college skills are necessary for all careers and not that large numbers of career skills are necessary for college.’
Setting this issue aside, how are national standards (NS) derived from the government’s NCEA Level 2 target?
‘PACT (NS) measures (and thus values) only numeracy and literacy.
The implication of this is that it narrows our curriculum.
PACT (NS) assumes that learning is linear.
The implication of this is a focus on areas of weakness (alleged gaps) as opposed to strengths.
PACT (NS) levels come from working backwards from NCEA Level 2. The formula:
All 5-year-olds = Year 12 minus 7 years
The implication of this is that the labels At, Below and Above get distributed by age 6.
Once ‘Below’ is issued, a child has to work twice as hard to get to ‘At’, as the ‘gap’ is cumulative.
BUT REMEMBER this is only in numeracy and literacy.
If you did happen to have an edge in another curriculum area, you won’t have time to pursue that strength. Any additional learning time is likely to be spent on MORE reading.
PACT (NS) assumes tidy year levels where kids should achieve numeracy and literacy standards based on age.
The implication of this is factory-model pedagogy.
PACT allows national standards to become the focus of assessment.
The implications (already evident) are that newer teachers now assess only against the national standards (at, below, above) and not against curriculum levels.
PACT (NS) values Pakeha ways of knowing over Maori and Pasifika pedagogies.
The implication of this is assimilation (i.e. whose standards?!).
PACT (NS) favours assessment tools that assume achievement and success are something carried out in isolation (e.g. eAsttle writing). Even worse, assessments that reward one clean correct answer (STAR, PAT).
The implication of this is a future population rewarded for rote learning and for problem solving independently (in timed isolation) – as opposed to collaborative, creative, critical thinkers.’
Let’s have a closer look at how national standards levels were determined.
As Tara has explained, the starting point was NCEA Level 2. The requisite achievement attributes were defined, and then these were distributed over the intervening year levels to make a supposedly linear set of steps from one year to the next, raising the achievement bar incrementally each year. This completely overlooks the evidence that children’s learning is not linear, but more typically a series of steps and plateaus, of learning and consolidation.
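The backwards-from-NCEA construction described above can be sketched in a few lines. The achievement scale and numbers here are invented purely to illustrate the linear assumption; they are not the actual standards:

```python
# Hypothetical achievement scale: 0 at school entry, 100 = 'ready for NCEA Level 2'
# at the end of Year 12. Linear back-mapping spreads the target evenly per year.
TARGET, YEARS = 100.0, 12

# The model decrees an identical annual step for every child, every year.
expected_by_year = {year: TARGET * year / YEARS for year in range(1, YEARS + 1)}

for year, level in expected_by_year.items():
    print(f"End of Year {year:2d}: expected level {level:5.1f}")

# Real learning proceeds in steps and plateaus, so a child on a plateau falls
# 'Below' the decreed line even if a later growth spurt brings them level again.
```

The design choice being criticised is visible in the comprehension above: the yearly expectation is a straight line by construction, so any non-linear learner is guaranteed to spend time labelled ‘Below’.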
There are no references to research showing that the decreed levels of achievement are valid descriptions of age-related abilities.
As explained previously, the potential age range of children at a given year level, for example, year 6, could be as much as 11 months.
Is the government now decreeing that all year 6 children will be able to achieve at the same level, regardless of their actual age? Is anyone in the government actually thinking? Or at the Dominion Post, going by this obsequious editorial, ‘Teachers in way of standards’, where the author states:
‘The time has come for teacher unions to accept that national standards in reading, writing and mathematics are here to stay.’
Really? Has s/he not read the statements from the Labour and Green education spokespeople that make it very clear that there is no place for national standards?
As with previous DomPost editorials about national standards, all this nonsense proves is that the author is well below any standard for reputable journalism. Any journalist with a modicum of ability and integrity would have no difficulty in writing an accurate and reasoned opinion piece. Standard: not achieved.
The only valid goal in the government’s educational policy is that all children should be competent in reading, writing and mathematics, albeit at age-appropriate levels.
Are national standards the way to achieve this?
Most certainly not.
Is there a better way?
Most non-educators will not have read the national standards. This is your lucky day.
Some examples for you:
How would you judge your child’s achievement against these?