Tess Jabbour
University of Western Sydney
101656: Researching Practice
Assignment 1:
What accommodations (if any) are needed to allow Deaf and Hard of Hearing
students to fairly participate in NAPLAN testing?
Literature Review: education and standardised testing for students who are
Deaf and Hard of Hearing.
1.0 - Introduction
This literature review highlights the diverse nature of Australian students who identify as either Deaf or Hard of Hearing. The purpose of this review is to investigate how best to accommodate these students when they undertake the National Assessment Program – Literacy and Numeracy (NAPLAN), which was introduced in 2008. Very limited information is available specifically relating to NAPLAN, presumably because of its relative infancy. A variety of global studies of standardised testing for students with additional needs have therefore been critically analysed in order to gain insight into which accommodations (if any) may prove suitable for students who are Deaf or Hard of Hearing (SDHH) when sitting NAPLAN.
2.0 - Deaf and Hard of Hearing Community: Language, Culture and Education.
SDHH are a highly diverse group. In Australia, more than 17,000 families have a child with hearing loss (Deaf Australia, 2014). These children have a range of characteristics that affect their ability to communicate in both educational and social settings. Members of
the community who identify as Deaf or Hard of Hearing range from those who experience
mild hearing loss to those who are profoundly deaf (Cawthon, 2009). Some SDHH utilise
supports such as cochlear implants or assistive hearing devices. These students come from a
variety of family backgrounds ranging from fully hearing families to native Deaf families
where one or both parents are deaf (Cawthon, 2009; Woolfe, Herman, Roy & Woll, 2009).
As well as catering for the varying degrees of hearing loss and the supports these students present with, their linguistic diversity, both oral and signed, must also be considered when communicating with and educating SDHH. Australia is a multicultural country in which more than 260 (oral) languages are commonly spoken (Van Vliet, 2011). However, Australian education curricula, both public and private, are predominantly English-language based. In
addition to the diversity of languages spoken in Australian homes, those in the Deaf
community are often exposed to signed languages (predominantly Auslan) which have
unique vocabularies and are grammatically and syntactically different from the structure of
the English language (Deaf Australia, 2014). The level of exposure to signed languages
among SDHH depends on a variety of factors including whether or not they come from
native Deaf families, the degree of their hearing loss, available Deaf community support
services and the signed language proficiency of their families and educators (Woolfe et al.,
2009). The majority (over 80 per cent) of Deaf students in Australia are integrated into mainstream classes on a full-time basis (Hyde & Power, 2004). However, some academics suggest that mainstreaming puts SDHH at a disadvantage. Aarons and Akach (2002) argue that Deaf students in a 'hearing' classroom cannot access the oral language of instruction and are therefore effectively excluded from the educational process. Evidence from Herman and Roy (2000) shows that even within such a diverse community, many Deaf children are entirely monolingual in a signed language until they begin school.
Aarons & Akach (2002), supported by Glaser & Van Pletzen (2012), suggest that specialised sign language centres, which would teach oral languages and text literacy through the medium of sign, are the most cost-effective, efficient and fair way to educate SDHH, as such centres give students access to the curriculum in their own language. However, Glaser & Van Pletzen (2012) also make the case that while this might indeed be appropriate, there is a lack of appropriately trained teachers (only around 20 qualified sign language teachers in the South African context of their study) to make it succeed, and students are emerging from such institutions with unconsolidated language skills. Glaser & Van Pletzen (2012) also refer to
the Linguistic Interdependence Principle which suggests that there is a commonality and
transferability across all languages. This principle states that “a common underlying
proficiency across languages will allow positive transfer to occur from a first to a second
language if there is adequate exposure to the second language and the motivation to learn
it” (Glaser & Van Pletzen, 2012, p. 27). At the very minimum, the grammatical differences between signed and oral languages put SDHH at a clear oral linguistic disadvantage relative to hearing students (in both mainstream and selective schools) and are a key justification for providing accommodations in standardised testing scenarios.
In a British-based study investigating early vocabulary development in native signers,
Woolfe et al. (2009) acknowledged the diversity among SDHH and conceded that the
structural differences between signed and oral languages puts SDHH, especially those who
are non-native signers, at high risk of delayed language development. While Woolfe et al. used the word 'delayed', signalling that language development will eventually reach expected levels, a study by Glaser & Van Pletzen (2012) on inclusive tertiary education makes it evident that Deaf adults (especially those who predominantly use a signed language) also demonstrate poor text literacy skills. Glaser & Van Pletzen noted that "…the average Deaf school leaver has a written language comprehension ability equal to that of a hearing child of eight" (2012, p. 25). This suggests that language development has not simply been delayed; it has stagnated. A consequence of this might be limited options to participate
in tertiary education or interact in the workforce. Both studies emphasise the need to
address this as early as possible. Woolfe et al. contend that there are very few assessments
for Deaf children (0-3 years) who are signed language users and that consequently
“decisions about appropriate educational placements or recommended interventions…are
frequently based on assessments of spoken and written language skills…”and that
“standardised assessments of Deaf children’s early sign language acquisition are needed in
order to evaluate children’s communication skills in sign against normative developmental
milestones” (Woolfe et al., 2009, p. 322). In order to conduct their investigation, the
researchers in this study provided parents of 29 native Deaf students with a psychometric
reporting tool called a Communicative Development Inventory (CDI) on which parents were
required to tick age-dependent items to indicate their child’s expressive and receptive
vocabulary (Woolfe et al., 2009). The researchers justify the need for their investigation by
stating that children aged 0-3 years who are monolingual native signers are not studied in
isolation but rather as part of a wider group of Deaf and Hard of Hearing children, many of
whom are not native signers and are therefore more likely to be bilingual (communicating in
a sign language and an oral language). The longitudinal nature of this study meant that
researchers monitored progression over time which allowed them to conclude that a
“smooth and upward growth curve was obtained for early vocabulary development” (of
native Deaf children) and that “receptive scores were in advance of expressive scores”
(Woolfe et al., 2009, p. 322). This finding is particularly relevant to the NAPLAN which
comprises four individual tests, some of which focus on expressive skills and others on receptive skills, as it suggests that different accommodations could be put in place for different individual tests (see 3.0). A school in Maryland (United States) took the further step of not scoring SDHH on one section of an individual assessment that required students to complete rhyming tasks (with which a profoundly Deaf student would have extreme difficulty) to demonstrate the ability to connect print to sound (Johnstone & Thurlow,
2012).
The small sample size (29 native Deaf children) of the Woolfe et al. (2009) study is noted; however, the authors justify it as relatively large compared to other standardised samples referenced in their background research (Woolfe et al., 2009). Although the study was conducted primarily as an initial step towards further research in the field, the authors noted that there was no demographic information available on the Deaf population in the UK, so it was impossible to determine how representative their sample was (Woolfe et al., 2009). They also noted that the sample was entirely "ethnically British white", which is at odds with the ethnic diversity of the UK (Woolfe et al., 2009, p. 329). A further potential drawback of this study is the possible bias of parents. In order to
obtain longitudinal data, parents were sent a new form, as well as a copy of their previous form, on a quarterly basis. This allows for the possibility that: 1) parents coached their children to achieve milestones on the form, for instance by repeatedly and selectively reading them stories about animals to assist with word identification around animals; or 2) parents did not give completely honest answers if a lack of growth between quarters caused feelings of inadequacy. Either of these outcomes would skew the results, especially in such a small research sample. To counter this, the researchers selected ten children to visit during the project, filming them interacting with their parents the day after the CDI was filled out. These CDIs were then compared with evidence coded from the video (Woolfe et al., 2009). It
is doubtful that this validity measure would have been effective in identifying coached
children. The unreliability of participant-reported data sets (such as those gathered through the longitudinal approach of the Woolfe et al. (2009) study) was highlighted by Cawthon (2009) in her research on assessment practices for SDHH.
In her study, Cawthon (2009) created a set of three vignettes which she sent to 373
professionals working in Deaf education. In addition to checking the appropriate boxes on the vignette, participants were also asked to provide qualitative justifications/motivations for
their recommendations (see Figure 1). Cawthon noted that while there is empirical research
surrounding the validity of accommodations on test scores, there is no solid theory around
motivations for educational decision making, specifically which factors justify decisions to
adopt a particular accommodation (Cawthon, 2009). This further justifies the need for
Cawthon’s investigation. Participants were encouraged to detail their decision-making thought process, making it more difficult for them to justify biased or unethical decisions in their responses. For instance, a teacher who would normally select ‘extended time’ as an accommodation because that is what they have always done, or because it is easier than organising a scribe, may reconsider their decision if they have to justify it. This leads to more ethically sound, student-focused outcomes. It also avoids the satisficing rule, under which Byron
(as cited in Cawthon, 2009) suggests that participants may not look through the entire range
of possibilities for the best solution but rather settle on the first one that meets an adequate
standard.
The vignettes in Cawthon’s investigation had randomly assigned conditions, including test subject, student skill level and language used in instruction (signed or oral). Figure 1 is an
example of a vignette used in the study.
Figure 1 - Condition: Low Reading, High Math Skills
Unlike the Woolfe et al. (2009) study, which was limited and open to bias by having respondents measure individual, specific students, approaching the investigation in this mode allowed Cawthon to answer wider research questions around her subject of interest. However, this can also be a limitation, in that it may be difficult for teachers to give consistent answers without an actual student in mind. Like the Woolfe et al. (2009) study, Cawthon took steps to maintain the validity of her data: 22 per cent of her study participants were surveyed a second time on an identical scenario several months later.
Cawthon found that it was common for her retest participants to approach the task
differently each time (Cawthon, 2009). Reasons for this are difficult to ascertain: the national nature of the vignette survey model meant participants were usually not local to the researcher and had no opportunity to obtain additional information, and likewise the researcher was unable to seek clarification or elaboration on qualitative answers. One of the key findings of Cawthon’s study was that recommendations varied by test subject (math or reading) and student skill level, but not by communication mode (either sign language alone or sign language and speech together) (Cawthon, 2009). Cawthon (2009) suggests that this may be due to the limited number of students assumed to be fluent in American Sign Language. Not surprisingly, interpreting individual test items was the most controversial (and less frequently selected) accommodation, as it involved a non-standardised translation of the test item (due to the grammatical deviations between sign languages and
oral languages) (Cawthon, 2009). It should be noted that the two most common accommodations selected in this study (when averages across all vignettes were tallied) were: 1) test directions interpreted (81 per cent); and 2) extra time (73 per cent). Teachers were reluctant to provide alternate assessments or to allow students to sign their responses. When analysing response justifications, Cawthon noted that participants focused largely on two areas: 1) communication mode of instruction; and 2) academic performance relative to year level (Cawthon, 2009). The focus on communication mode of instruction suggests reasons
why teachers may resist a signed language modification in a testing situation considering
that the majority of students in the study were instructed in (oral) English. This is relevant in
an Australian context considering the high percentage of SDHH who are educated in
mainstream schools in a primarily oral language setting.
3.0 – Accommodations, NAPLAN and Standardised Testing in Students who are Deaf or
Hard of Hearing
The NAPLAN is an annual standardised assessment for Australian students in years 3, 5, 7
and 9. The assessment individually tests skills across the four key learning areas of reading
(of written English), writing (examining genres and text types), language conventions
(English: spelling, grammar and punctuation) and numeracy (ACARA, 2013a). In addition to
monitoring individual student growth (over 2 years), the data provided by NAPLAN also
provides individual school tracking, which is publicly available on the My School website.
This school-by-school tracking provides averages of each school’s achievements in the key
tested learning areas and compares them to other schools across the country as well as to
other schools with similar characteristics (ACARA, 2015). While the My School website
provides information about students from an English as a Second Language (ESL) background, it does not provide data on the number of students with disabilities sitting the test (Elliott, Davies & Kettler, 2012).
As with all forms of standardised testing, the NAPLAN serves to ensure that students are
meeting key educational outcomes (ACARA, 2013b). Such tests act not only as a driver for
improvement but also as an accountability measure through the provision of nationally
comparable data (ACARA, 2013b). According to Cawthon, “For SDHH, meaningful
participation in accountability reform depends on assessment practices that take their
linguistic and academic backgrounds into consideration” (2009, p. 4). This means that simply
asking SDHH to participate in standardised testing without due consideration of their ability to access the content becomes a pointless activity. In her study (see 2.0), Cawthon (2009) makes
reference to the lack of national standards for providing testing modifications to SDHH in
America. This lack of cohesion can also be seen in an Australian context. Deaf Australia
(2014) note that the participation of SDHH in NAPLAN has been low since it came into
existence in 2008. This is attributed to two alarming factors. The first is that there is often minimal support to assist SDHH to access the test. The second is that schools actively discourage students from participating, believing their results will cause the school’s average score (publicly available on the My School website) to plummet. These two factors are in direct contrast to the UN Convention on the Rights of Persons with Disabilities, which states: “States Parties recognise the right of persons with disabilities to education…In realising this right, States Parties shall ensure that: c) Reasonable accommodation of the individual’s requirements is provided; d) Persons with disabilities receive the support required, within the general education system…” (United Nations, 2006).
Australian SDHH are not getting fair and equitable access (if any) to NAPLAN and are therefore not being accurately benchmarked. According to Elliott et al. (2012), around 5 per
cent of students are withdrawn or exempt from NAPLAN each year instead of having
accommodations put in place. Data on the reasons for these exemptions are not collected. This figure contrasts with American standardised testing models, which require even students with severe cognitive disabilities (around 1 per cent of the student population) to participate in standardised testing. In this instance the test may be
an alternate assessment so long as it can “yield results separately in both reading/language
arts and mathematics” (Elliott et al., 2012, p. 10) and the results can indicate adequate
yearly progress (Elliott et al., 2012). Deaf Australia (2014) has highlighted a clear need for
research into inclusive and accessible approaches to standardised tests for SDHH.
The overriding purpose of NAPLAN is to provide a summative snapshot of the capabilities of
a student based on a particular tested skill (language comprehension, writing, numeracy or
reading). If accommodations are not carefully selected, SDHH may either underperform or overperform, neither of which provides their families and educators with an accurate picture of their academic capabilities in relation to the tested skill. Test accommodations need to allow
SDHH to demonstrate the skill being tested to the best of their ability without giving an
unfair advantage over their peers. Cawthon (2009) noted that a valid accommodation will
improve the testing score of SDHH but not the scores of students without a disability. This,
however, is at odds with the research findings from Cawthon’s (2009) investigation which
found that academic performance relative to year level was a key factor in the accommodations selected. Very few participants in Cawthon’s (2009) study selected ‘alternate assessment’ or ‘other assessment’ as an appropriate accommodation for students below grade level. This suggests that students working well below the academic level the test demands may have been given accommodations allowing them to answer questions that they (and non-SDHH peers at the same academic level) could not otherwise answer because of a lack of skill rather than as a consequence of their hearing loss.
It is important to understand that the validity of an accommodation is dependent on the
purpose of a test (Isaacson, 1996). An accommodation that may be appropriate for a math
test may not be appropriate for a language convention test. For instance, a scribe may be
able to translate a math question from Auslan to English and still allow the student to show
his/her mathematical abilities. This would not work in a situation where the student was
required to correctly structure simple or compound sentences in English. In her investigation into teacher perspectives on appropriate accommodations (see 2.0), Cawthon (2009) found that the subject of the test was a key driver in the selection of appropriate accommodations. ACARA’s national protocols for NAPLAN test administration also allow room for different adjustments for different tests: “A student may have access to more than one adjustment in any one test and different adjustments may be appropriate for different tests” (ACARA, 2015, p. 15). In the aforementioned study (see 2.0) Woolfe et al. (2009) found that
receptive vocabulary development outpaced expressive vocabulary development. Although
the study looked at Deaf and Hard of Hearing children pre-schooling, its results add weight
to the suggestion that accommodations should not only be individualised to the student but
also to the particular NAPLAN key learning area test.
Some of the most common accommodations afforded to SDHH include:
3.1 Extended Time
Extended time refers to giving a student additional time to complete an assessment. It is
often seen as a desirable accommodation as it does not alter the content of the test. In
Cawthon’s (2009) study (see 2.0) extended time was one of the most common
accommodations recommended by educators of SDHH. Supporting Cawthon’s (2009)
findings, Elliott et al. (2012) also found that extended time was a highly beneficial
accommodation. Their review of a variety of investigations into the benefits of extended time for students with disabilities found that five out of eight studies reported extra time correlating with higher scores for students with disabilities.
3.2 Interpreter for Instructions
This refers to the student being provided with an interpreter to translate the instructions of
the test, but not the content, from a written/oral language to a signed language. This can be
done via video or in person. This can be helpful in allowing the student to translate oral
instructions given about the rules and requirements of the test, and was a popular choice among the educational professionals recommending accommodations in Cawthon’s (2009) study: 81 per cent of participants recommended it as a suitable testing accommodation for SDHH (Cawthon, 2009). However, according to Power, Hyde and Leigh, verbatim translation from speech to sign and vice versa can be unintelligible to SDHH because it
“violates the naturally occurring visual and movement structures of natural sign languages”
(2008, p. 38).
3.3 Interpreter for Questions
In this situation SDHH are provided with an interpreter to translate the questions directly
from written/oral languages into a signed language. There was some hesitation about recommending this across the studies analysed. Woolfe et al. (2009) noted a lack of direct correspondence between signs and lexical categories found in English, such as noun/verb pairs.
Cawthon’s study participants were also hesitant to suggest an interpreter for questions, “…
unless the student is proficient in ASL or a signed communication system, translating test
items may not be beneficial for the student” (Cawthon, 2009, p. 12). Glaser & Van Pletzen
(2012) noted the time lag that comes with accurately interpreting between oral and signed
languages. This suggests that if this accommodation were applied, it would be wise to also allow for extended time.
3.4 Scribe
For SDHH, the use of a scribe generally involves the student reading test items and then signing their answers to a scribe, who conveys them verbatim onto the testing sheet. Problems can arise from the structural disparities between signed languages and oral/written languages, which have been identified and discussed in the aforementioned studies (Woolfe et al., 2009; Cawthon, 2009; Glaser & Van Pletzen, 2012). However, in Cawthon’s (2009) study over a third of the educational professionals surveyed suggested that they would consider this an appropriate accommodation, especially if signing was a student’s dominant form of communication.
3.5 Alternate Assessment
An alternate assessment is generally seen as a last-resort accommodation and is often only
administered to students who have a disability and are well below grade level across several
areas (Cawthon, 2009). Such assessments are not usually administered to SDHH unless they have additional disabilities (Cawthon, 2009), or in cases where changes made by adopting an accommodation put the validity of the test in jeopardy (Cawthon & Wurtz, 2008). Elliott et al. (2012) found that in the most extreme cases alternate assessments could be beneficial so long as they were aligned with content standards. They referenced three key types of alternate assessment: a) portfolio work, comprising a collection of student-generated
work; b) performance assessment requiring a child to perform a set of tasks to elicit a
response or generate a product; and c) a rating scale, comprising teacher judgements of knowledge and skills demonstrated on a repeated basis.
4.0 – Conclusions
As with any social science research into human reactions and responses, it is difficult to draw absolute conclusions about the most suitable accommodations required for SDHH to participate in NAPLAN. As was shown in Cawthon’s (2009) study, participant responses altered over time. Easton-Brooks noted that “What further complicates the law of human response is that knowledge is very relevant and is based on cultural and historical experiences” (2012, p. 33). This suggests not only that such knowledge is unique to a point in time, but also that research needs to be done specifically in relation to the Australian
education system. Currently close to 5 per cent of Australian students do not sit the
NAPLAN. There is little information around the reasons for this or the ways in which it can
be remedied, with accommodations, in an Australian context. What is clear is that SDHH are at a linguistic disadvantage relative to their hearing peers, and this has impacts from childhood through to tertiary education. Without access to standardised testing, these students cannot easily be benchmarked or have appropriate interventions put in place.
Although many SDHH are fluent in a sign language, the grammatical contrast between signed and oral languages presents a challenge for educators when deciding how best to engage these students in standardised testing with appropriate accommodations.
Reference List
ACARA. (2013a). NAP. Retrieved February 10, 2015, from http://www.nap.edu.au/
ACARA. (2013b). Why NAP. Retrieved February 6, 2015, from http://www.nap.edu.au/about/why-nap.html
ACARA. (2014). My School. Retrieved February 10, 2015, from http://www.myschool.edu.au/
ACARA. (2015). 2015 national protocols for test administration. Retrieved from http://www.nap.edu.au/naplan/school-support/national-protocols.html
Aarons, A. & Akach, P. A. O. (2002). Inclusion and the Deaf child in South Africa. Perspectives in Education, 20(1), 153-170.
Byron, M. (2005). Simon’s revenge: Or, incommensurability and satisficing. Analysis, 65(4), 311-315.
Cawthon, S. W. & Wurtz, K. A. (2008). Alternate assessment use with students who are deaf or hard of hearing: An exploratory mixed-methods analysis of portfolio, checklists, and out-of-level test formats. Journal of Deaf Studies and Deaf Education, 14(2), 155-177.
Cawthon, S. W. (2009). Making decisions about assessment practices for students who are
deaf or hard of hearing. Remedial and Special Education, 32(4), 4-19. doi:
10.1177/0741932509355950
Deaf Australia. (2014). Policy on the National Assessment Program – Literacy and Numeracy
(NAPLAN) for Deaf People in Australia. Retrieved from
http://deafaustralia.org.au/wp-content/uploads/DA-Policy-on-NAPLAN-Sept-
2014.pdf
Easton-Brooks, D. (2012). The conceptual context of knowledge. In S. R. Steinberg & G. S. Cannella (Eds.), Critical qualitative research reader (pp. 33-42). New York: Peter Lang Publishing.
Elliott, S. N., Davies, M. & Kettler, R. J. (2012). Australian students with disabilities accessing
NAPLAN: Lessons from a decade of inclusive assessment in the United States.
International Journal of Disability, Development and Education, 59(1), 8-19. doi:
10.1080/1034912X.2012.654934
Glaser, M. & Van Pletzen, E. (2012). Inclusive education for Deaf students: Literacy practices
and South African Sign Language. Southern African Linguistics and Applied Language
Studies, 30(1), 25-36. doi: 10.2989/16073614.2012.693707
Herman, R. & Roy, P. (2000). The influence of child hearing status and type of exposure to BSL on BSL acquisition. Proceedings of the 1999 Child Language Seminar, City University, London, 1, 117-122.
Hyde, M. & Power, D. (2004). Inclusion of deaf students: An examination of definitions of inclusion in relation to findings of a recent Australian study of deaf students in regular classes. Deafness and Education International, 6(2), 82-95.
Isaacson, S. L. (1996). Informal written-language assessment procedures. Simple ways to
assess deaf or hard-of-hearing students’ written skills. Volta Review, 98(1), 183-189.
Johnstone, C. J. & Thurlow, M. L. (2012). Statewide testing of reading and possible implications for students with disabilities. The Journal of Special Education, 46(1), 17-25. doi: 10.1177/0022466910371984
Power, D., Hyde, M. & Leigh, G. (2008). Learning English from signed English: An impossible task? American Annals of the Deaf, 153(1), 37-47. doi: 10.1353/aad.0.0008
United Nations. (2006). Convention on the Rights of Persons with Disabilities, Retrieved
February 10, 2015, from
http://www.un.org/disabilities/convention/conventionfull.shtml
Van Vliet, P. (2011). The census in a multicultural Australia. Australian Government Department of Immigration and Citizenship. Retrieved February 5, 2015, from http://blog.abs.gov.au/Blog/beyondthecount.NSF/dx/PetervanVliet.ppt/$file/PetervanVliet.ppt
Woolfe, T., Herman, R., Roy, P., & Woll, B. (2009). Early vocabulary development in deaf native signers: A British Sign Language adaptation of the communicative development inventories. The Journal of Child Psychology and Psychiatry, 51(3), 322-331. doi: 10.1111/j.1469-7610.2009.02151.x