In a post on the for-profit University of Phoenix, John Quiggin argues that:
The unimpressive performance of Phoenix and Edison, along with the complete failure of other for-profit initiatives and the absence of any significant success stories in the for-profit sector over many centuries and many different countries and institutional environments suggests that the for-profit model is fundamentally defective as a way of providing education.
Why is this so? The obvious reason is that educational institutions must build up reputations over decades, since the consequences of cutting corners take decades to emerge. This makes no sense in terms of profit maximization and individual incentives, so it requires a professional/academic ethos which, at least in part, overrides narrow forms of individual rationality. Such an ethos cannot survive in a for-profit firm.
I don’t know the Edison story, but Phoenix itself has been pretty successful: 300,000 students and a fairly good stock market performance. The index of listed for-profits published annually by the Chronicle of Higher Education shows the sector trending down since its 2004 peak, but still well above where it was when the index started in 2000. Phoenix says its graduation rate is 50-60%, which is probably comparable to the similar client group (mature-age students) in Australia, for whom the attrition statistics (http://www.dest.gov.au/sectors/higher_education/publications_resources/statistics/higher_education_attrition_rates_1994_2002.htm#The_findings_in_more_detail) …
Reputation, obviously, but also space. UWA has a beautiful campus on the edge of the river in a high-class suburb. Tennis courts, rowing clubs, great free lunch-time concerts, substantial libraries, laboratories … that’s what our picture of a university is. It’s fine to do an online vocational course, and outfits like Phoenix probably do it well. But what company could afford the land and infrastructure to compete with our traditional universities, and then make a profit?
(Andrew, if you’re not getting more intelligent comments than mine, it’s because the site seems slower than ever. I like your posts, but I miss the comments!)
Andrew, you have to switch hosts. Your site is inaccessible two days out of three.
Do you really think all of these rankings affect beliefs about the quality of well-known universities? The ranking of places like Harvard, Princeton, Yale, Stanford, UC Berkeley, Cambridge, Oxford, LSE and even ANU and Melbourne on these lists would be irrelevant to me if I had to choose where to study. The various rankings tend to be more use when it comes to assessing the quality of lesser-known institutions.
Do you really think the CEQ has been a great innovation? One of the supposed learning objectives is “teamwork”. Why should universities be teaching students how to work as part of a team? In what sense is that a key outcome of a university education?
The US research finds that about 20% of students say rankings are a ‘very important’ reason for choosing their college. But they affect the vanity of 95% of university administrators. It is very clear that rankings affect university behaviour (not always in desirable ways, as I have blogged about before).
The CEQ has been much criticised, but I think it was a great innovation in shocking universities out of their complacency surrounding teaching. Even today, after a decade of slow but steady improvement, graduates are still giving their experience a ‘fail’ on four out of six teaching questions. Many positive things have flowed at least in part from it: internal surveys to identify weaknesses at the unit and lecturer level, removing weak teachers from the classroom, teaching competency being necessary for promotion, at least rudimentary teacher training becoming common for new staff, far more interest in technology as a teaching tool.
There is no obligation on any university or department to take notice of all the questions, and indeed in the last few years the CEQ has changed so that universities can select questions that seem relevant to their particular mission or disciplines, rather than every graduate being sent an unvarying standard set of questions.
But most of the remaining common questions seem sensible to me: how good staff were at explaining things, whether helpful feedback was provided, whether expectations of students were clear, etc.
Andrew, I have five basically independent points on this:
1) I think it’s not really true that universities are self-accrediting. For many courses, the lowest standard is basically set by some external professional body giving accreditation. Medicine is the obvious one, but there are many others (perhaps the majority).
2) If you can provide any indication that the teaching certificate many of us were basically forced to get actually increases teaching performance by any significant amount, I’ll be impressed.
3) Universities like Deakin are completely tuned in to providing educational convenience: they have huge numbers of off-campus students. If you have the numbers, you might like to compare their graduation rates with those of UP and the like.
4) Measuring success based on completion rates is a stupid thing to do, and it leads to the pass-everyone phenomenon that exists in many places now. The best thing that comes out of it is that we can now charge full fees for masters courses where you might learn something 🙂
5) I find it hard to distinguish for-profit from not-for-profit in any meaningful way. Pretty much everyone is trying to make as much money as possible; it’s just that some are willing to cut more corners than others.
Conrad – I agree that (1) applies to most professional qualifications.
On (2), it’s a good point; I have seen no before-and-after study. But something is causing the student satisfaction ratings to improve, in the face of other trends, such as higher student:staff ratios, that we might, other things being equal, expect to cause a decline.
On (3) and (4): raw completion rates are difficult to interpret. Australian external students have the lowest same-institution completion rates: 44% in one 1990s study. But it is hard to know exactly why, since non-academic factors such as family and work responsibilities loom large for this group. About 30% of Phoenix’s students are entirely external. However, in my view completion rates should definitely be monitored as possible signs of poor selection or poor teaching.
On (5), I agree. Even before public universities became surplus-seekers, they were motivated by money, doing anything Canberra wanted in exchange for a bit of cash.
Andrew, I represent for-profit and non-profit tertiary education providers (vocational and higher) in NZ and have done a lot of work on their history. For-profit providers had a substantial presence in post-school education up until the early-to-mid 1900s in NZ (and in Australia too), but as the government started to increase funding to universities, polytechnics, TAFEs, etc., the private presence began to decline. Many examinations were also set by industry bodies with open entry in the early days, but they were later run as the culmination of a specified course of study, which required accreditation that was often limited to public providers. So, rather than any simple ethos-based reason, the decline of for-profit post-school education was mainly due to price-cutting and protectionism by the public sector. Non-profits survived mainly because community groups wanted to prop them up, e.g. theological schools.
In recent years, it has been changes to accreditation and student loan/government subsidy rules that have allowed for-profit providers to grow. The international student market has also supported growth.
This doesn’t mean that the for-profit sector is suddenly going to surge, though. Governments are unlikely to fully expose their institutions to competition, and investors are unlikely to duplicate lush campuses like the UWA example given above when the government can change the rules quickly. That’s why Phoenix-type niches are likely to be filled first.