English Language Assessment for the Business Processing Outsourcing (BPO) Industry: Business Needs Meet Communication Needs

The ability to communicate well in English with native speaker customers on the phones, especially those from the USA, the UK and Australia, is a much valued commodity in the newly established outsourced and off-shored (O & O) call centres in post-colonial Asian countries such as the Philippines and India. But how is this commodity sourced, developed and measured within the business processing outsourcing (BPO) industry in Asia? This article chronicles, from an applied linguistic researcher/consultant stance, the development of language measurement tools and processes for English communication in these O & O call centres: tools and processes owned and operated by the industry stakeholders themselves and tailored to their business needs. The article provides an account of the risks and rewards in crafting the BUPLAS solution, which is becoming a popular communications assessment choice in the call centre industry in Asia.

Key words: Business Communication, Assessment, Call Centres


Introduction
This paper describes the development of an English language assessment system for the Business Processing Outsourcing (BPO) industry over a five-year period (2003-2008), with specific focus on the newly established call centres in Manila. Structured as a chronicle, the paper traces the observations of, key insights into, and decisions taken about the development of a set of tailored English language assessment tools and processes to meet the highly specific needs of the Asian call centre industry. These assessment tools and processes have now become known as the Business Performance Language Assessment System (BUPLAS) and are used in an increasingly large sector of the BPO industry across Asia.

The BPO Industry in 2003 in the Philippines
In 2003 non-native speakers of English with good levels of English language proficiency were readily available from top universities in Manila, and at that time the call centre industry in the Philippines employed about 50,000 customer service representatives (CSRs). This number grew rapidly over the four years I was there, to about 350,000 in 2007, and by 2011 had increased to 750,000, with the Philippines now ahead of India as the preferred destination for voice support for customers.
Soon after I arrived in Manila in 2003, with a recently completed doctoral thesis on English language curriculum and evaluation processes in large multinational Hong Kong workplaces, I was invited into Texman as a consultant to investigate the causes of the communication breakdown on the phones and asked to provide recommendations. The call centre at Texman at that time had just started, with about 120 CSRs who spent their days (nights) on the phones assisting American customers with a vast array of problems ranging from address changes to insurance claims to the purchase of new insurance products. It became clear that I really needed to understand in detail what the Texman call centre did, and specifically what the CSRs needed to accomplish on the phones when working with these American customers in English.
One Texman account manager complained: The customer satisfaction scores are well below what we predicted and many customers complain that the calls take too long, the CSRs don't appear to know what they are doing and they have difficulty in quickly understanding the needs of customers and getting to a solution.
(Account manager: Filipino male, early 40s; data collected as part of a consultancy assignment, 2004)

And the Country Manager said: We are going out of our way to get the best and the brightest graduates for Texman, we offer them good salaries, good product training but some of them come undone when they hit the phones…some of them complain that the customers are difficult to understand and that when customers get angry they forget everything they've learned…maybe we need more intercultural training?

(Country manager: British male, late 50s; data collected as part of a consultancy assignment, 2004)
Call centres in the Philippines and India have been popular places for young, upwardly mobile local university graduates to work. However, the work is not easy, and typically the CSRs work through the night serving American customers during their day. At Texman, teams of 12 CSRs work together on the floor and are supervised by a team leader who monitors and supports them. Texman is an 'in-bound' call centre, which means that the CSRs receive calls from customers, as opposed to 'out-bound' call centres that 'cold call' customers with sales pitches.
At Texman, the CSRs needed to provide accurate and complete information about the insurance products they were dealing with in a professional and timely manner. This involved knowing and communicating the insurance product details, logging into the screens to get customer details, and entering any decisions made or problems resolved on the call. An account manager headed up each of Texman's three different accounts and was ultimately responsible for the quality of communication of the team of CSRs. The three Texman accounts were 'Change of Contact Details', 'Insurance Policy Enquiries' and 'New Insurance Products and Services'. Experienced CSRs said they could meet call target numbers (about 100 per day) and had developed strategies for working within the desirable time limit of 3 minutes per call. Quality measures probed call volume, product accuracy and communication (sometimes called professional) performance. One newly hired CSR said: I'm terrified of going live on the phones as you never know who exactly you're going to get! It could be a really aggressive young American who hates that she's talking to a foreigner or it could be a sweet old man who just can't hear what you're saying…I mean you have to be quick to profile your customer, listen carefully, work the screens and be polite and caring and all that…and not take more than 3 minutes…At the beginning I just couldn't meet the quality targets and lots of people I started with have left, the pressure was just too much.
(Newly-hired Texman CSR: female, mid 20s; data collected as part of a consultancy assignment, 2003)

The local and on-shore business managers of Texman unanimously complained right from the start about communication breakdown on the phones from the Philippines. This surprised me, as the spoken language level of the new graduates from some of the best universities in Manila seemed to me to be better and much more fluent than that of their Hong Kong graduate counterparts. Added to this, young Filipinos seemed to have an affinity for all things American, having been, in the past, under American colonial governance. However, this communication complaint was fast becoming a serious business issue. I spent the first few weeks observing the CSRs at work throughout the night, talking to the business managers, team leaders and quality personnel, 'barging into' live calls at night and listening to recorded calls during the day. Each 'day' at Texman started at 9pm, when America was waking up, and as we entered the call centre in the dark everyone would cheerfully greet each other with 'good morning' and get ready for the day's work, which would go through to 6am. Typically the day would start slowly as Americans finished breakfast and got off to work. By 11pm the phones were 'hot' and queues of customers were waiting to ask a whole range of insurance policy questions. The day would gradually start to wind down about 5am, and by 6am the phones had stopped and the CSRs headed home or into town for a relaxing breakfast beer.
This research was compelling, and the hundreds of recorded and live calls I was able to access kept me busy for weeks. It became clear to me as I listened to and transcribed some of these calls that the reasons for communication breakdown were highly complex. Some (a small number) related to the 'politics' of outsourcing, where customers would accuse the CSRs of taking American jobs; others related to the linguistic and cultural demands of the different accounts; and some related to business and legal requirements in dealing with customers that interfered with smooth communication on the phones.
As suggested earlier, the communication demands of each of the three accounts at Texman were very different. These accounts comprised routine work (e.g. changing customer contact details); insurance claims that were mostly straightforward but required good product and procedural knowledge (e.g. closing out claims); and more complex work promoting new products (e.g. advising on the right type of insurance coverage for the customer). Much of the communication breakdown appeared to be occurring in the non-routine insurance claims and the new product and policy advising accounts. What I observed on the more difficult calls was a tendency for customers to become frustrated and lose their tempers. Filipino CSRs found this 'directness' a difficult cultural and linguistic problem to deal with and often reported 'freezing up'. The CSRs would also be dealing with difficult personal circumstances (e.g. a family member dying, or a customer committing to a new and expensive insurance product); again, understanding cultural norms and using an appropriate communication style became problematic in these situations.
There were a few instances in the call data where breakdown was attributable to linguistic problems in pronunciation (insurance claims account, 2003; quoted in Lockwood, Price and Forey, 2008, p. 163). However, this kind of phonological breakdown was rare. Poor lexico-grammatical choices were also evident; for example, the Filipino tendency to use the modal 'would' instead of 'will' in certain circumstances caused communication problems. There were instances when the CSR was making promises to do something urgently and, instead of saying 'I will do this immediately', would say 'I would do this immediately'. When I talked to CSRs about why they did this, they said they were taught that 'would' is more polite, not realizing that using it like this compromised the 'promise'. However, most misunderstandings on the phones appeared to relate to the CSRs not being able to easily profile their customers, nor to understand very nuanced meanings such as customer sarcasm, jokes and long silences. Being able to effectively build customer relationships and explain product information and policy regulations also appeared to be a constant communication challenge. For a full discussion of this early research see Forey and Lockwood (2007), Lockwood, Price and Forey (2008) and Friginal (2007). Another key problem that put pressure on smooth communication on the phones related to some of the Texman business requirements; for example, Texman required agents in the routine account responsible for contact detail changes to 'upsell' products at the end of the call. Specific 'upsales targets' were set for the CSRs and reviewed on a monthly basis. The data showed that successfully transacted calls in this account were being marred by customers getting angry at being targeted for the sale of new products. Another example of business requirements interfering with smooth communication was where CSRs were required to rebut a customer policy cancellation request three times.
In listening to sample calls where this policy was followed, it was clear that it was frustrating and angering many callers who were adamant about having their cancellation request processed immediately. This particular rebuttal policy was often accompanied by a 'script' that the CSR was encouraged to read if the customer became resistant, which, as often as not, made communication even more fraught. One of the CSRs complained:

If I don't follow the business rules about rebutting the policy cancellation request three times this would be recorded on my quality assurance monthly scorecard and would affect my bonus and appraisal, so I do it. But then what often happens is the customer gets really upset and then I have to defuse the anger and deal with that…which can also affect my scores if I don't manage to calm the customer down…it's a lose-lose situation.

(Filipina call centre CSR: female, late 20s; data collected during consultancy assignment, 2004)
Disentangling the politics, the business requirements and the linguistic and intercultural problems was an early challenge, as they were all equally presented to me as 'communication breakdown', with the strong implication that the CSRs' English language ability was the problem. It was important to get the business to understand that these political issues and business requirements were not a problem of English language competency, and I wondered how these might have affected smooth communication on shore.
However, I wanted to remain focused on the CSRs as second language speakers. The Texman business managers and trainers attributed much of the CSR communication breakdown to first language/mother tongue interference (MTI) mistakes, as they claimed these led to grammar mistakes and comprehensibility difficulties. However, when listening to the calls I found communication breakdown rarely had anything to do with MTI and much more to do with intercultural and language issues in understanding the nuanced meanings in the calls, with the way in which the CSRs organized their responses, and with interactional rapport-building skills (Hood & Forey, 2008; Forey & Lockwood, 2007).
In one Texman transcript extract, the customer was complaining that he was not notified about an interest payment. Such problems in the calls were generally not addressed in the coaching sessions, as the quality assurance personnel were not trained English language specialists. Feedback would be ad hoc, and the QA specialists often had the same challenges as the CSRs they were supposed to be helping.
As a result of these early conversations and observations, a number of questions emerged:
(i) If the CSRs are the best and brightest and are still getting quality complaints about communication ability, how are they being recruited and assessed for English communication?
(ii) Apart from product training, what kind of English communications and intercultural training are they getting, how are they measured, and what kind of benchmarks are they expected to reach before they go on the phones?
(iii) What kind of diagnoses are being made from problematic calls, and what kind of feedback do the CSRs get?
(iv) How are the CSRs being measured for communication ability as part of their quality assurance processes?
All these questions were, in different ways, probing the ways Texman measured what they called 'good communication', their 'core commodity'. I will now chronicle my observations and questions in recruitment, in training, and in QA and coaching.

English Language Communication Assessment Challenges in Recruitment
There was, and is, a lot of 'best practice wisdom' circulating amongst the call centres in both India and the Philippines, and 'myths' about the reasons for communication failure. Mother tongue interference (MTI) in spoken communication and inaccurate grammar are two favourites. From a layman's point of view, one can see how a non-linguist might pinpoint these as possible problems in using Indian and Filipino speakers as call centre CSRs for USA customers. As one American-born Filipino account manager said: The solution is simple: get rid of what makes us different, and then we'll be the same. We have an accent and we are making grammar mistakes…

(Texman Account Manager: male, late 40s; data collected during consultancy assignment, 2004)

The problem was that Texman, like many other call centres, had singled out only these two issues when recruiting, when training and when checking for quality. The washback of this was evident in their communication assessment tools and processes in recruitment, and it resulted in highly unfair recruitment practices such as asking prospective agents to read tongue twisters targeting known MTI problems. Common phonological features of Tagalog speakers of English include treating the /p/ and /f/ and the /v/ and /b/ consonant sounds as homophones. I witnessed a young, anxious Filipino agent reading one such tongue twister in an employment recruitment interview. The task appeared to be highly inappropriate given the likely phonological mistakes that the prospective agent might make, but the interviewers seemed to find this funny. Needless to say, very few applicants passed this 'pronunciation test'.
The other popular assessment task at recruitment was the completion of a grammar test comprising mostly subject-verb agreement exercises, as this was known to be a common Filipino English problem, although, according to recent research in business English as a lingua franca, not one that causes communication breakdown (see, for example, Jenkins, 2007). Below is an extract from the Texman grammar test.
Pick the correct answer:
1. Everyone go to the party
2. Everyone goes to the party
3. Everyone was gone to the party
4. Everyone has go to the party

At Texman, when this kind of recruitment assessment process unsurprisingly yielded only about a 1% success rate, the senior management in HRO, instead of abandoning these practices, added even more tasks and processes to the recruitment interview, including sending native speaker Texman HRO recruiters over from Houston to 'get the right people'. This decision seemed to me to be borne out of an insecurity about finding out why the current assessment was not working well; falling back onto on-shore practices was a common response. After the Texman recruiters from Houston became involved, the recruitment success rate fell even further. I asked them what they were looking for in the interview process, and it became clear to me that there must be a better way to do this. The solution must surely lie in drawing on good language assessment practices and importing them into Texman.
It was becoming important for Texman to simplify and strengthen its recruitment assessment process, which often took well over 2 hours to complete and used many resources. After all, a good IELTS speaking assessment takes 12-15 minutes, so why were HRO taking all morning to decide whether a prospective CSR had good enough spoken English to work at Texman? However, it wasn't only getting a benchmark, but also getting 'granularity' in the benchmark for the different Texman accounts that was important, as explained earlier in this article. Only through such assessment granularity could CSRs be placed appropriately into accounts that matched their levels of spoken English: lower-level speakers in the routine accounts and higher-level speakers in the more complex accounts. Texman did not know what level of English was required for the three accounts they had moved to Manila from Houston; they said they wanted very good speakers, but were all three accounts equally challenging in terms of communication skills? This notion of account communication difficulty was obviously a new one to the business, where such on-shore accounts had been staffed by native speaker CSRs. Benchmarking each account would entail examining the typical customer profiles and the main purposes for calling in. It would also entail listening to samples of 'good' and 'bad' calls and interviewing account managers, quality assurance personnel and the CSRs themselves to see where the communication 'pressure points' appeared to be. Intuitively I felt the routine contact details account to be an easier one to work on than the insurance claims and policy advising accounts, but I needed a valid and reliable process for benchmarking each account. This process would also ultimately inform the level of granularity I might need in the Asian call centre sector.

English Language Communication Assessment Challenges in Training
Texman provided 6 weeks of training, comprising product training (4 weeks) and communications training (2 weeks), prior to CSRs going onto the floor. The communications training at Texman was fragmented into four different, very short 18-hour courses as follows: grammar lessons, to encourage language accuracy at the sentence level, delivered through a teacher-centred content delivery approach which included the completion of grammar sheets; accent neutralization, to minimize the impact of the mother tongue in spoken English through drills; soft skills training, a 'content' course to develop an understanding of customer care; and American culture training, to develop a knowledge of facts about America. Problems with this 'communications' syllabus model have been discussed elsewhere (Friginal, 2009; Lockwood, 2012). Communications trainers provided classroom teaching 8 hours a day and the timetables were packed with input about American culture and how to be polite to customers; there seemed to be very little attention paid to language development and fluency practice. One trainer complained: At the end of the ten days of training we're exhausted…it seems we do all the work and the talking which doesn't seem right. We give the trainees quizzes and then at the end we've downloaded speaking assessment tasks from the internet; we have our own scoring system. We add the marks up and then if they don't reach the grade, they don't go any further and they are asked to leave the company.

(Filipino Texman trainer: male, early 30s; data collected as part of consultancy assignment, 2005)
Another trainer at Texman, who was TESOL (Teaching English to Speakers of Other Languages) qualified, was highly critical of these practices and commented:

Texman expect us to work miracles in English communication levels…two weeks is nothing and much of this is a 'stand and deliver' type of method where the CSRs are getting information overload on American culture and customer care…there is virtually no real spoken language learning and fluency practice going on. The content quiz at the end is supposed to reflect increased language performance levels…it has absolutely nothing to do with spoken language levels so we may as well not even do it. Adding up quiz scores definitely does not mean improved fluency and better ability to deal with customers on the phones.

(Texman TESOL trainer: Filipina, female, late 30s; data collected as part of consultancy assignment, 2005)
There appeared to be no rationale behind the cut-off grade of 96% on the language courses. Most of the trainees got over 96%, and the ones that failed got between 90% and 95%. This was a metric that was entered into the system, although it bore no relation to the score the CSR received at recruitment, nor did it in any way relate to the scores entered for quality assurance. Given this was the case, it was hard to fathom why Texman bothered to enter and act on these metrics. One of the Account Managers said: We like to have these scores from the very beginning of when they come to Texman to measure problems and improvements. These scores are used at the time of annual appraisal but we don't really know what they are supposed to measure.

(Texman Account Manager: Filipino, mid 40s; data collected as part of consultancy assignment, 2005)
Once on the floor and immediately after training, Texman put the CSRs into a 'nesting' program where they worked alongside more experienced colleagues until they felt confident about taking calls on their own. This 'scaffolded' support in the early CSR work on the floor appeared to work well. It was clear, however, that after the 'nesting' period, which can last up to 3 months, most of them still required coaching intervention to ensure that they reached the stringent quality assurance targets that were set for them by the business.
Whilst any kind of pre- and post-course summative assessment on a short 10-day communications programme would not show growth, it occurred to me that a bespoke speaking rubric would be very helpful as a formative assessment tool as the CSRs transitioned through product and communications training, into the 'nesting' period and onto the floor. So the requirement emerged to make the spoken assessment rubric suitable for formative as well as summative purposes. Having a formative speaking assessment tool based on carefully chosen criteria would also provide positive washback into the coaching sessions and perhaps address some of the issues raised by the TESOL trainer earlier in this article.

Nesting is great insofar as the CSRs get a buddy at a time when they are feeling very anxious about taking live calls and talking to real customers; but when you look at the kind of feedback they get on their spoken English, it is a real problem as the coaches don't know what they're talking about, and also their English isn't that good. Ironically, some of the problematic CSRs get moved sideways into coaching and quality…I observed a CSR yesterday getting into trouble about sounding too Filipino and not having subject-verb agreement…these issues, I didn't feel, threatened communication…and there were problems on her call around organizing her information more clearly for the customer and keeping better control of the call…I thought the advice she got was poor because the diagnosis was poor and we have nothing to guide us…so we're in a vicious cycle.

(Texman trainer and coach: female, 30s; data collected as part of consultancy assignment, 2004)
The idea of having a systematic tool across the business that could serve the multiple purposes of placement, diagnosis, and training and nesting achievement, used both formatively and summatively, seemed attractive. Such a set of tools and processes could also be carried into quality assurance. Another need that arose out of observation of coaching was for the assessment tool to benchmark the coaches' own English language ability! I also began to think that a sociolinguistic framework would perhaps provide a more holistic and systematic view of language, one that could incorporate soft skills, culture, lexico-grammar and pronunciation into a single communications training and assessment approach. I turned to Canale and Swain (1980), Canale (1983) and Bachman (1990) and their early definitions of communicative competency, which seemed highly relevant to this worksite context. Such a theoretical construct would also introduce criteria such as discoursal and interactional competencies, thus challenging the sole reliance on MTI phonological and grammatical mistakes in assessing speaking.

English Language Communication Assessment Challenges in Quality Assurance and On-the-job Coaching
On-shore and off-shored call centres around the world typically employ outside vendors to provide customer satisfaction feedback scores; this quality assurance tool is known as the CSAT (customer satisfaction) measure, and Texman was no exception. These CSAT measures are problematic, as the questions typically asked about communication are very generic, and it is hard, in the final analysis of this data, to disentangle what may be a genuine communication problem from what may be another problem affecting communication; note the earlier discussion about the entanglement of politics and business requirements in perceptions of communication levels. In fact, one large outsourced call centre in Cebu has found the CSAT feedback to be significantly contaminated by the fact that it is an outsourced service (i.e. Americans do not like the fact that work has been moved to the Philippines, Costa Rica and India and therefore give poor evaluations of communication in the CSAT). This is an important finding given that the external CSAT measure is often set at an agreed benchmark level in the Service Level Agreement for outsourced call centres.
The internal scorecard used at Texman is an internal quality assurance measure administered on a regular basis in each account. Judgments are made by quality assurance (QA) specialists on a monthly basis on recorded calls. This score provides a basis for the formal annual performance review and is also used for coaching purposes. The scorecard covers both product knowledge and communication competency as a list of items to which QA personnel assign a yes/no response. This list is typically put together by QA specialists, and when I was working in Manila these scorecards looked remarkably similar across different sites. Scorecards are yet another 'common sense' solution developed by the business, but they reflect a reductionist view of language communication and business performance. The overall score indicates whether the CSR is meeting the standard set by Texman. Examples of statement questions taken from the Texman scorecard (internally known as 'The 35 golden points of service') were:
Did the agent use the customer's name at least three times during the call? Y/N
Did the agent deal with the customer in a professional manner? Y/N
Did the agent verify the details of the customer? Y/N
A critique of the scorecard can be found in Lockwood, Forey and Elias (2009). They comment: The important point is that it (the scorecard) only addresses attitude and product knowledge, while ignoring both language skills and how insufficient language skills can present as behavioural and attitudinal problems. A completely different approach needs to be taken in a situation where CSRs are not operating in their mother tongue and are dealing with customers from a different culture.
(Lockwood, Forey & Elias, 2009, p. 158)

Texman complained that there was a chronic mismatch between the CSAT score and the internal scorecard measure, and there was constant pressure on account managers to correlate these measures. I felt this was difficult to achieve given the variables at play in the CSAT measure. I could, however, work on improving the internal scorecard as an English language communication measure.
It was becoming even clearer to me at this stage that it would be helpful for Texman to have a systematically developed communications assessment tool that could be used summatively for recruitment and quality assurance purposes, and formatively for training and coaching purposes. Furthermore, such a tool, in order to be valid, should reflect a theoretical construct for spoken measurement that selects criteria based on the needs of CSRs at Texman, such as the communicative competency framework previously suggested by Canale and Swain (1980). Communicative competency criteria for a speaking assessment tool could then inform both the scorecard and the rubric. This would bring the communications assessment much more in line with 'good practice' in spoken language assessment in high-stakes standardized examinations worldwide, which score against criteria and a scale. Clearly it should equally be informed by the specific business and communication needs I was learning about in the call centres. Bringing these two together in BUPLAS was, and is, an on-going challenge (Davies, 2010; Lockwood, 2012).

Developing a Theoretical Assessment Framework for the Call Centre Context
Given the key contextual challenges in the Texman call centre outlined above, and given my observations about the way the business had responded to these challenges, it seemed to me that I needed to be thinking about two things concurrently. First, I needed to think about the context of the call centre and specifically who would be using these assessments. I also had to think about a theoretical construct that could underpin Texman's approach to what they assess, why they assess and how they assess. It seemed clear to me that Texman needed to embed a communication assessment capability within different parts of the business, namely recruitment, training, nesting and coaching, and quality assurance. The assessment also needed to reflect specific business requirements, for example average handling time (AHT) and first-time resolution (FTR).
The contextual needs for the communications assessments can therefore be summarized as follows.
First, there was a requirement for both summative and formative assessment tools and processes within Texman. Summative assessment tools and processes were needed by the business to make hiring and quality assurance decisions, e.g. are they good enough to work here (recruitment)? Are they meeting our QA standard (on the floor)? However, in the communications training, nesting and coaching processes, formative assessment tools were required to ensure communications improvements on the phones through good diagnosis and feedback processes.
Secondly, this set of summative and formative assessment tools and processes would need to be shared and embedded across all stakeholder groups at the key points of agent contact (e.g. recruitment, training, nesting, QA and coaching). This requirement meant that the tailored language assessment tools and processes should be owned and operationalized by Texman employees, with the knowledge and processes transferred to them. Texman did not want to rely on an expensive outside provider of 'tests'; they wanted something that could be owned and used by key employees and embedded in their workflow and systems, so whatever was to be developed needed to reach an audience unfamiliar with L2 speaker problems and with applied linguistic practices in rating language assessments. This was of initial concern to me, as I was not aware of any other worksites that had embedded and trained up their own employees to carry out, and take ultimate responsibility for, summative and formative language assessments. The conventional wisdom in the applied linguistics and language testing field was that a language assessor should not only be an expert English speaker and writer, but should also have a TESOL background and be trained specifically to carry out the language assessments. There is, for example, a very lengthy process in accrediting an IELTS speaking assessor, and this is the norm in high-stakes international testing. Could I create a team of Texman worksite assessors internally, and what were the training and quality assurance implications of trying to do this? The overwhelming advantages of having alignment around a set of assessment tools and processes and a common metalanguage across the Texman workplace were clear, but what were the potential problems in doing this? These were some of the key challenges within which the Business Performance Language Assessment System (BUPLAS) was planned and developed.
Thirdly, there was the need for a theoretical construct which would underpin the assessment tools and processes. What theoretical framework was most suitable for assessing Texman CSRs, tagging accounts and assessing other internal employees such as quality assurance personnel? In late 2004 I started with a framework, as suggested earlier, for spoken assessment that probes the components of 'communicative competency' (Canale & Swain, 1980; Canale, 1983; Savignon, 1983; Bachman, 1993) as a starting point in determining a set of criteria that related to the work at Texman. Drawing on this work, I decided the following features needed to be evident in the assessments to ensure successful communication on the phones: first, language competence, i.e. the ability to make choices in the lexico-grammatical system and the phonological system to make intended meaning; secondly, discourse competence, i.e. the ability to recognize and construct the flow of appropriate spoken text; thirdly, sociolinguistic competence, i.e. the ability to understand intercultural nuance and meaning; fourthly, interactive competence, i.e. the ability to make appropriate interpersonal choices when building rapport; and finally, strategic competence, i.e. the ability to repair language breakdown, particularly in spoken language.
I started with this communicative competency framework, and the criteria finally developed for the BUPLAS rubric included the following.
(I) Pronunciation: I decided not to include mother tongue interference (MTI) mistakes unless they were causing communication breakdown, but to include the more meaning-making problems that had emerged from the early research (Wan, 2010; Forey, Lockwood & Price, 2008; Cowie, 2010), such as intonation and word stress.
(II) Lexico-grammatical accuracy and range: I decided not to include language accuracy only, as spoken language is full of inaccuracies (Luoma, 2004). I also included the need for making good language choices to make meaning, which had emerged from the early research (Forey & Lockwood, 2007; Hood & Forey, 2008), e.g. modality and tense choices; therefore 'range' was important.
(III) Discourse: I decided to include this criterion as it seemed to be a common business requirement for CSRs to be explaining procedures and providing lengthy explanations surrounding their products and services; it had also emerged from the early research that a lack of clear and logical organisation of the message was causing breakdown (Lockwood, Forey & Price, 2008).
(IV) Interactive/strategic: I decided to include this as the work of the CSRs was concerned with managing the customer relationship and repairing misunderstandings, and again this emerged from the research. I was most concerned about this criterion because, although I had worked with 'speaking' scales and descriptors for a range of examinations, there was nothing that attempted to probe as precisely as this criterion into the relationship-building capacity of the candidate. However, this was such a strong requirement of the CSR at work that I felt it important (Lockwood, 2012).
The rubric was developed as a five-level scale in a first draft of BUPLAS in 2005 and piloted at Texman as well as in a large outsourced call centre provider in Manila.
Substantial training of business personnel to use the BUPLAS scales and descriptors for the purposes of recruitment took place, and improvements were made to the assessment tasks and procedures to ensure increased validity and reliability of the process. While there were some initial problems in ensuring a good understanding of the rubric and calibration, these were resolved through further training and support of the recruitment team. Building on the success of BUPLAS in recruitment, Texman then decided to use the BUPLAS tools and processes in training and coaching, and finally as part of their quality assurance processes. Different groups of Texman employees were trained to use the scales and descriptors for summative (QA monthly reporting) and formative (coaching interventions) purposes. Provided that the trained assessors were themselves of a high BUPLAS speaking level, I was pleasantly surprised at how readily I was able to standardize the first groups of Texman recruitment, training, coaching and quality assurance assessors in using the BUPLAS scales and descriptors. The Texman HRO was delighted with the outcome.

Conclusion
BUPLAS remains a tantalizing research site and continues to develop. The extent to which BUPLAS took on the business requirements of the call centre, including the embedding and handing over of the assessments, is no doubt part of the reason for its success in the industry. Because of the length of time I spent immersed in the Texman worksite, the BUPLAS assessment development could be seen as an extended ethnomethodological study where, through access to the business informants, including the CSRs and their phone conversations, through document analysis, and through analysis of communication breakdown from the authentic recorded data, I was able to build a set of 'indigenous criteria' (Douglas, 2005) for communications assessment at Texman. As Jacoby (1998) proposed, we need to explore what 'insiders' say and do as part of their professional culture. Such insider criteria can only be identified through detailed observation of workplace tools and practices. It is often the case, as reported by Douglas (2005) and Jacoby & McNamara (1999), that despite passing a specialized test, the candidate still fails to communicate in an appropriate way on the job. What is the point of a tailored test, such as the Occupational English Test (OET) used in Australia to screen overseas doctors into the medical profession, if the doctors then fail to communicate well with patients on the wards? This has been a perennial concern of language for specific purposes (LSP) testing (Lumley & Brown, 1996; Jacoby & McNamara, 1999). Tying the BUPLAS assessment tools and processes very tightly into the business requirements, thereby building precisely on the notion of incorporating 'indigenous criteria' into performance assessment for LSP (Douglas, 2005), was key.
The business connectedness of BUPLAS is strong, as is its incorporation of research outcomes surrounding communication breakdown in these offshored and outsourced call centres in Asia. The coming together of these two important factors may go some way to explaining its success to date in the industry.
The BUPLAS tools and processes have now been adopted by a number of call centres in India, Panama and the Philippines, and these businesses are reporting better conversion rates at recruitment, better diagnosis and feedback in coaching, and better indicators on the quality measures that are so important to this industry (Lockwood, 2012). Understanding and responding to the business needs of the industry, and ensuring fairness to the agents in getting and securing their jobs as CSRs in Asian call centres, has shaped, and continues to shape, BUPLAS.

Author
Dr. Jane Lockwood completed her doctoral studies at Hong Kong University investigating the English language curriculum and assessment needs of Hong Kong workplaces. She has recently published in the area of communication breakdown in the business processing outsourcing (BPO) industry in the Philippines and India.