NeuroRehabilitation 32 (2013) 185–190
DOI: 10.3233/NRE-130835
IOS Press

Study of accent-based music speech protocol development for improving voice problems in stroke patients with mixed dysarthria

Soo Ji Kim (a,*) and Uiri Jo (b)
(a) Department of Music Therapy, Graduate School and Ewha Music and Rehabilitation Center, Ewha Women's University, Seoul, Korea
(b) Private Practice, Seoul, Korea

*Address for correspondence: Soo Ji Kim, Department of Music Therapy, Graduate School and Ewha Music and Rehabilitation Center, Ewha Women's University, 11-1 Daehyun-dong, Seodaemun-gu, Seoul 120-750, Korea. Tel.: +82 2 3277 6916; E-mail: specare@ewha.ac.kr.

Abstract. Based on the anatomical and functional commonality between singing and speech, various types of musical elements have been employed in music therapy research for speech rehabilitation. This study aimed to develop an accent-based music speech protocol to address the voice problems of stroke patients with mixed dysarthria. Subjects were six stroke patients with mixed dysarthria who received individual music therapy sessions. Each session lasted 30 minutes, and 12 sessions, including pre- and post-test, were administered for each patient. To examine the protocol's efficacy, measures of maximum phonation time (MPT), fundamental frequency (F0), average intensity (dB), jitter, shimmer, noise-to-harmonics ratio (NHR), and diadochokinesis (DDK) were compared between pre- and post-test and analyzed with a paired-sample t-test. The results showed that MPT, F0, dB, and sequential motion rates (SMR) increased significantly after administering the protocol. There were also statistically significant differences in shimmer and in the alternating motion rate (AMR) of the syllable /ke/ between pre- and post-test. The results indicate that the accent-based music speech protocol may improve speech motor coordination, including respiration, phonation, articulation, resonance, and prosody, in patients with dysarthria. This suggests the possibility of using the music speech protocol to maximize immediate treatment effects in the course of long-term treatment for patients with dysarthria.

Keywords: Accent-based, music-speech protocol development, mixed types of dysarthria, stroke patients

1. Introduction

Along with the many complicated neurological problems that stroke patients present, weakness, incoordination, or paralysis of the muscles involved in speech production are common problems after the onset of stroke (Mackenzie, 2011). This group of speech motor problems due to neurological impairment is known as dysarthria (Duffy, 2005). Approximately 20–30% of stroke patients present with dysarthria (Warlow, Dennis, van Gijn et al., 2000), which is characterized by slow, weak, and imprecise speech patterns with uncoordinated movements of the speech musculature (Kim, 2010). While the number of stroke patients with dysarthria is relatively large, less attention has been given to this consequence of stroke than to aphasia, an impairment of language.

Seven types of dysarthria are identified based on the damaged brain region: flaccid dysarthria, spastic dysarthria, ataxic dysarthria, hypokinetic and hyperkinetic dysarthria, unilateral upper motor neuron (UUMN) dysarthria, and mixed dysarthria (Darley, Aronson, & Brown, 1969; Duffy, 2005). Unilateral brain damage causes unilateral upper motor neuron dysarthria, showing imprecise articulation, while
bilateral brain damage is associated with spastic dysarthria, marked by reduced loudness and a strained-strangled voice with short breath support. Hypokinetic and hyperkinetic dysarthria result when damage occurs in the basal ganglia motor circuits. Ataxic dysarthria is related to cerebellar damage, which results in the breakdown of motor organization and control (Duffy, 2005; Kim, 2009). Mixed dysarthria, as the name indicates, involves a mixture of two or more of the other dysarthria types; when more than one type is present, the dominant type is determined by the location of the damage (Darley et al., 1969; Duffy, 2005). All of the above types of dysarthric speech include dysfunctions in laryngeal movements, respiration, articulation, and swallowing (Baker, Ramig, Luschei, & Smith, 1998; Nagaya, Kachi, Yamada, & Igata, 1998; Leopold & Kagel, 1997). Depending on the damaged brain area, stroke patients with dysarthria may present poor articulation, breathlessness, monotone speech, or difficulties with the intonation and rhythm of their speech. Some symptoms continue after the completion of rehabilitation; therefore, social interaction and communication skills remain limited (Brady, Clark, Dickson, Paton, & Barbour, 2011).

Since the 1970s, various interventions have been developed for dysarthria, yet no standardized approach has been established (Sellars, Hughes, & Langhorne, 2002). The well-known Lee Silverman Voice Treatment (LSVT) technique focuses on increasing both respiratory effort and vocal fold adduction (Ramig et al., 1995), and other treatments have increased subglottal air pressure to increase inspiratory and expiratory volume (Hixon, 1973). Overall, however, patients with dysarthria need direct assistance in altering vocal quality.

An extensive body of research has demonstrated the use of musical elements in enhancing speech functions (Haneishi, 2001; Kim, 2010; Magee, Brumfitt, Freeman, & Davidson, 2006; Tamplin, 2008). Additionally, the relationship between music and speech production has been investigated through brain research. Broca's area, in the posterior part of the inferior frontal gyrus, is generally acknowledged to be involved in speech production, and current research findings reveal that this area is also activated by various musical tasks, including pitch and rhythm discrimination, pitch memory, and musical syntax (Binder et al., 1997; Koelsch et al., 2002; Maess et al., 2001; Patel, 2003; Platel et al., 1997). Based on the similarity of brain involvement and anatomical structures between singing and speech production, it may be expected that musical elements such as rhythm, pitch, tempo, lyrics, and dynamics can be successfully incorporated into speech rehabilitation.

Among these musical elements, accent not only changes musical quality by influencing meter perception (Robert & Mari, 2009), but also shapes melody and rhythmic patterns in a musical and temporal context (Boltz, 1993). In fact, accent is observed in both music and speech. One voice therapy developed using accent on vowel sounds is the Accent Method (AM), which is used primarily for acoustical improvement in speech production (Kotby, 1995). In conjunction with body movement, AM can be used for hyperfunctional and hypofunctional voice problems.
Three major principles of AM are: 1) optimal abdomino-diaphragmatic breath support; 2) rhythmic play of accentuated, relaxed vowels intended to carry over to speech; and 3) dynamic rhythmic body and arm movements (Bassiouny, 1998). AM targets functional exercises of respiration, phonation, and articulation, which are then transferred to speech. Several studies have used AM to address varying degrees of voice problems (Fex, Fex, Shiromoto, & Hirano, 1994; Simberg, Sala, Tuomainen, Sellman, & Ronnemaa, 2006), yet voice dysfunction due to neurological damage has not yet been investigated. This might be because stroke patients typically demonstrate overall lowered cognitive functions, including a short attention span and decreased short-term memory, and because keeping a steady beat and utilizing melodic and rhythmic accents demand a musical competency that speech therapists may lack.

Previous music therapy literature has examined musical elements in dysarthric speech for regulating speech motor function by providing rhythmic cues and temporal and melodic alterations (Haneishi, 2001; Pilon, McIntosh, & Thaut, 1998; Tamplin, 2008). Integrating accent into music applications for dysarthric speech, such as singing and chanting, may enhance acoustical components of speech parameters, because producing accentuated sound stimulates respiratory control and glottal attack. Based on these previous findings and the advantages of accent, this study investigated an Accent-based Music Speech Protocol (AMSP) with respect to acoustical components of speech, including maximum phonation time (MPT), fundamental frequency (F0), decibel level (dB), jitter and shimmer, noise-to-harmonics ratio (NHR), and diadochokinetic rate (DDK), in stroke patients with dysarthria. In order to develop the AMSP, previous speech protocols for dysarthric speakers and the original version of AM were reviewed.

2. Method

2.1. Subjects

Six stroke patients (mean age = 58.83) with mixed dysarthria participated in this study. Because of the multiple characteristics of mixed dysarthria, patients who showed spastic components as the primary symptom were included. All subjects had been diagnosed with stroke within one year prior to the study and had been diagnosed with mixed dysarthria; a neurologist and a speech therapist confirmed each subject's dysarthria type. Inclusion criteria were: 1) no hearing or visual difficulties, 2) age between 50 and 65, and 3) completion of written consent forms, including video/audio consent forms. Researchers verbally explained the study protocol and answered subjects' questions regarding the information in the consent form before subjects completed the forms. Subjects were recruited at a rehabilitation medical center in Seoul, and all procedures of this study followed the research protocol of the medical center. Each subject's characteristics are outlined in Table 1.

Table 1
Subject characteristics

Subject  Gender  Age  Dysarthria type   Onset     MMSE-K  Diagnosis
1        M       64   Mixed dysarthria  11/02/04  29      Quadriplegia secondary to Rt. PCA, Lt. midbrain thalamic infarction
2        M       57   Mixed dysarthria  10/09/28  28      Lt. hemiplegia d/t Rt. pons infarction
3        M       52   Mixed dysarthria  11/02/06  29      Rt. hemiplegia d/t Lt. pons infarction
4        M       53   Mixed dysarthria  10/05/09  30      Quadriplegia secondary to pons, medulla, cerebellar infarction
5        F       62   Mixed dysarthria  10/09/30  26      Quadriplegia secondary to both cerebellar, Rt. brain stem infarction
6        F       65   Mixed dysarthria  11/01/20  23      Lt. hemiplegia, Rt. BG ICH

MMSE-K: Korean version of the MMSE; Lt: left; Rt: right; BG: basal ganglia; ICH: intracerebral hemorrhage.

3. Data collection

Each subject's voice was recorded using a microphone (Logitech Desktop Microphone USB MIC) and the computer software program Praat (version 5.2; Boersma & Weenink, 2007). The same distance between the subject's mouth and the microphone was maintained by providing three test trials. Voice recordings were performed with a laptop computer with a Pentium processor and the Logitech Desktop Microphone USB MIC. The microphone-to-mouth distance was fixed at 15 cm, and each subject was asked to phonate the vowel /a/ at least five times at the most comfortable level.

Praat version 5.2, a computerized speech analysis program, was used to analyze the voice recording samples. Maximum phonation time (MPT) was determined by measuring the duration of the vowel /a/ produced at a comfortable amplitude and pitch after maximum inspiration. Following appropriate instructions, the procedure was repeated three times by each subject, and the recording with the longest duration was taken as the MPT. Fundamental frequency (F0) reflects the rate of vocal fold vibration and closely matches our perception of pitch; in speech, changes in F0 can reflect linguistic and affective prosody. F0, dB, jitter, shimmer, and NHR were estimated from frequency and amplitude perturbation measurements.
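The study obtained these measures with Praat's standard voice analysis. For readers who want to reproduce a comparable analysis programmatically, the sketch below uses parselmouth, a third-party Python interface to Praat; it is a minimal illustration under stated assumptions, not the authors' analysis pipeline. The file name is hypothetical, and the perturbation and harmonicity parameters are common Praat defaults rather than the settings used in the study.

```python
# Minimal sketch: extracting F0, intensity, jitter, shimmer, and HNR from a
# sustained /a/ recording with parselmouth (a Python interface to Praat).
# "sustained_a.wav" is a hypothetical file name; analysis parameters below are
# common Praat defaults, not the settings used in this study.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("sustained_a.wav")

# Mean fundamental frequency (Hz) and mean intensity (dB) over the phonation.
pitch = snd.to_pitch()
mean_f0 = call(pitch, "Get mean", 0, 0, "Hertz")
intensity = snd.to_intensity()
mean_db = call(intensity, "Get mean", 0, 0, "energy")

# Perturbation measures are computed from a point process of glottal pulses.
point_process = call(snd, "To PointProcess (periodic, cc)", 75, 500)
jitter_local = call(point_process, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
shimmer_local = call([snd, point_process], "Get shimmer (local)",
                     0, 0, 0.0001, 0.02, 1.3, 1.6)

# Praat reports the harmonics-to-noise ratio (HNR, in dB); the NHR used in the
# study is the related noise-to-harmonics ratio, so values are not identical.
harmonicity = call(snd, "To Harmonicity (cc)", 0.01, 75, 0.1, 1.0)
mean_hnr = call(harmonicity, "Get mean", 0, 0)

print(f"F0 = {mean_f0:.1f} Hz, intensity = {mean_db:.1f} dB")
print(f"jitter (local) = {jitter_local:.4f}, shimmer (local) = {shimmer_local:.4f}")
print(f"HNR = {mean_hnr:.2f} dB")
```

MPT itself is simply the longest sustained /a/ duration across the trials and can be read from the length of the voiced portion of the recording, so it is not computed in the sketch.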
Subjects were then asked to repeat the syllables /pe/, /te/, and /ke/ as rapidly as possible and for as long as possible at a comfortable pitch and loudness. This diadochokinetic (DDK) task is used for the clinical assessment of the motor function of speech (Fimbel, Domingo, Lamoureux, & Beuter, 2005), and DDK can be used as a measure of syllable timing and rhythm in speech (Kent, Weismer, Kent, Vorperian, & Duffy, 1999). Irregular syllable timing during DDK is strongly associated with ataxic dysarthria and can be associated with spastic dysarthria as well (Tjaden & Watling, 2004). DDK measures are divided into two types: alternating motion rate (AMR) and sequential motion rate (SMR). AMR is measured as the rapid repetition of a single syllable (e.g., /pe/), and SMR is measured as the rapid repetition of a sequence of syllables (/pe/-/te/-/ke/).
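DDK performance is commonly summarized either as a repetition count over a fixed interval or as a rate in repetitions per second. The sketch below shows the latter convention as a simple illustration of how AMR and SMR values can be derived from an annotated recording; the repetition counts and durations are invented for illustration and are not data from this study.

```python
# Illustrative computation of DDK rates from manually annotated productions.
# Counts and durations below are invented examples, not data from this study;
# the counting convention (rate per second vs. count per fixed interval) must
# follow whatever protocol is actually used.
from dataclasses import dataclass

@dataclass
class DDKTrial:
    label: str         # e.g., "/pe/" for an AMR trial, "/pe te ke/" for SMR
    repetitions: int   # number of syllable (or sequence) repetitions counted
    duration_s: float  # duration of the repetition train in seconds

    def rate_per_second(self) -> float:
        # AMR: single-syllable repetitions per second.
        # SMR: full /pe te ke/ sequences per second.
        return self.repetitions / self.duration_s

trials = [
    DDKTrial("/pe/", repetitions=28, duration_s=5.0),       # hypothetical AMR trial
    DDKTrial("/te/", repetitions=26, duration_s=5.0),
    DDKTrial("/ke/", repetitions=24, duration_s=5.0),
    DDKTrial("/pe te ke/", repetitions=9, duration_s=5.0),  # hypothetical SMR trial
]

for t in trials:
    print(f"{t.label}: {t.rate_per_second():.2f} repetitions/s")
```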
4. AMSP protocols

Based on AM, the researchers integrated musical elements into accentuated vocalization. The AMSP consisted of four stages (see Table 2 for a comparison with the original Accent Method).

Table 2
Accent Method vs. accent-based music-speech protocol (AMSP)

Introduction
  Accent Method: Relaxation & voice rest
  AMSP: Upper body stretching with music
Breathing exercise
  Accent Method: Breathing exercise
  AMSP: Breathing exercise with a hand drum
Phonatory exercise
  Accent Method: 1) /f/v/s/z/; 2) /a 'a:/, /i 'i:/, /u 'u:/ with Largo tempo; 3) /a 'a'a'a:/, /i 'i'i'i:/, /u 'u'u'u:/ with Andante tempo; 4) /a 'a'a'a'a'a:/, /i 'i'i'i'i'i:/, /u 'u'u'u'u'u:/ with Allegro tempo
  AMSP: 1) /f/ as long as possible; 2) /a/ with descending glissando; 3) /a/e/i/o/u/ with singing; 4) /a 'a:/, /i 'i:/, /u 'u:/ vocalization (tempo 50–60), three beats; 5) /a 'a'a'a:/, /i 'i'i'i:/, /u 'u'u'u:/ vocalization (tempo 70–80), four beats; 6) /a 'a'a'a'a'a:/, /i 'i'i'i'i'i:/, /u 'u'u'u'u'u:/ vocalization (tempo 90–130), four beats
Transferring
  Accent Method: Reading, monologue, or dialogue
  AMSP: Melodic chanting with accents

The first stage was warm-up (4 minutes). Subjects were asked to stretch both arms and move them up and down slowly. If a subject was hemiplegic, he or she was asked to hold the involved hand with the unaffected hand. Trunk rotation and head movements followed. All movements were accompanied by steady, sedative music in a four-beat meter.

The second stage was respiratory training (4 minutes). Subjects were asked to breathe in and out with their hands on their abdomens, following the researcher's demonstration. This stage was accompanied by keyboard scales played with ascending and descending melody lines.

The third stage was vocalization (10 minutes). In this stage, the researcher held a hand drum and played two preparation beats, and subjects then vocalized /a/ while pretending to yawn. This exercise was intended to facilitate opening of the throat and relaxation of the muscles involved in vocalization. Subjects were then asked to sing a simple song with /a/, /e/, /i/, /o/, /u/, or a combination of two of these vowels. After completing the singing with vowel sounds, subjects were asked to vocalize the same vowel sound twice in a row, accentuating the second vowel (/a/ -> /a:/); the second, accentuated vowel was longer than the first. The researcher added more vowels according to each subject's physical condition.

Lastly, melodic chanting with accents was performed (12 minutes). Six Korean traditional chanting repertoires were selected. Subjects chanted with accents in conjunction with hand drumming on every first and third beat. Various lyrics and numbers of syllables were included.

5. Study procedures

All subjects received conventional therapies, including speech therapy. After subject recruitment was complete, a total of six stroke patients with mixed dysarthria completed consent forms. A pretest of aerodynamic and acoustical parameters was performed in a quiet, isolated room. A total of 10 AMSP sessions was then implemented daily over two weeks; because all subjects were in sub-acute care at a rehabilitation medical center, the period of the music therapy intervention was limited. Upon completion of the 10 AMSP sessions, the posttest was performed in the same place as the pretest.

6. Results

Stroke patients with dysarthria received 10 AMSP sessions, and the results of the pre- and post-tests were analyzed using paired t-tests (p < 0.05). Regarding aerodynamic and acoustic parameters, results showed statistical significance for MPT, F0, intensity (dB), and shimmer, but no significant results were found for jitter and NHR. A particularly marked and significant (p < 0.001) improvement in intensity was found. Mean DDK rates also improved compared with pretest mean scores; however, the differences did not reach statistical significance for /pe/ and /te/. Significance was found for /ke/, which is related to jaw movement, and for the /pe te ke/ sequence (see Tables 3 and 4).

Table 3
Results of aerodynamic and acoustical data

Measure         Pre (mean ± SD)  Post (mean ± SD)  t      p
MPT             7.40 ± 3.41      14.46 ± 7.80      -3.04  0.028*
F0              135.92 ± 32.12   159.49 ± 43.02    -4.34  0.007*
Intensity (dB)  59.60 ± 3.09     69.21 ± 2.88      -6.63  0.001*
Jitter          1.60 ± 1.80      0.66 ± 0.46       1.49   0.195
Shimmer         10.26 ± 5.15     7.27 ± 3.95       3.55   0.016*
NHR             0.21 ± 0.24      0.07 ± 0.06       1.94   0.110

*p < 0.05

Table 4
Results of DDK rate

Syllable     Pre (mean ± SD)  Post (mean ± SD)  t      p
/pe/         10.00 ± 2.73     16.50 ± 8.66      -1.89  0.117
/te/         10.50 ± 3.28     15.16 ± 7.62      -2.24  0.075
/ke/         8.16 ± 4.35      14.16 ± 7.52      -3.35  0.020*
/pe te ke/   3.93 ± 1.60      5.48 ± 1.92       -4.04  0.010*

*p < 0.05
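The pre/post comparisons in Tables 3 and 4 are paired-sample t-tests across the six subjects. As a minimal sketch of this kind of analysis, the snippet below runs a paired t-test with SciPy; the individual pre/post values are invented for illustration, since the study reports only group means and standard deviations.

```python
# Minimal sketch of the paired-sample t-test used for the pre/post comparisons.
# The six pre/post MPT values below are invented for illustration; they are
# not the study's raw data.
from scipy import stats

mpt_pre = [5.1, 7.9, 3.2, 11.0, 6.8, 10.4]     # hypothetical pretest MPT (s), n = 6
mpt_post = [9.7, 15.2, 6.1, 24.3, 12.9, 18.5]  # hypothetical posttest MPT (s)

t_stat, p_value = stats.ttest_rel(mpt_pre, mpt_post)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")

# With the pretest values listed first, a negative t statistic indicates that
# the posttest mean is higher, matching the sign convention in Tables 3 and 4
# for measures that increased; significance is judged against alpha = 0.05.
```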
7. Discussion

In the present study, the AMSP was applied to stroke patients with dysarthria to improve speech production and the oral-motor function of speech. Intentional accents were integrated into the steps of the AMSP, and hand drumming was used to facilitate subjects' accents. Compared with the Accent Method used in speech therapy, the AMSP added melodic chanting with accents and utilized musical cues for respiration control to maximize treatment efficacy. Patients in this study presented difficulties with speech breathing control due to decreased respiratory function, and altering the length of vowels or lyrics with musical cues can directly contribute to an improvement in intensity. Using accents in melodic chants may also facilitate laryngeal movement, resulting in improved F0. Overall, MPT can increase with improved coordination of respiration and vocalization. Producing intentional accents and accentuated melodic chanting required regulated timing of inhalation and exhalation as well as vocal fold adduction. When subjects made an effort to accentuate as they produced a sound, the sequence of voice initiation may have occurred more accurately, including the inhalation and increased vocal fold tension that are important for producing phonation.

No statistical significance was found for jitter and NHR, which may reflect ceiling effects. Mean jitter and shimmer values measured at pretest were elevated, as expected for dysarthric speech; however, overall mean values decreased after the completion of the sessions. Regarding voice quality, the alteration of tempo and the repetition of an accentuated melodic chant may play a role in regulating laryngeal and/or respiratory muscles. Despite the non-significance of the jitter and NHR scores, the decreased mean scores may represent an improvement in voice quality among the subjects.

Improvements in articulation were observed, as manifested by changes in DDK rate. Melodic chanting in conjunction with accents requires relatively slow and exaggerated movements of the oral motor muscles and structures; accordingly, the change in /ke/, which reflects jaw movement, reached significance, whereas the changes in /pe/ (lip movement) and /te/ (tongue movement) did not. It is assumed that emphasizing accents on the lyrics may lead to clearer jaw movements, while less attention was paid to enunciation, which requires finer articulatory movements.

Considering its steps, the AMSP was well suited to facilitating greater participation from stroke patients, because upper body relaxation is essential for speech production among dysarthric speakers. The warm-up stage and the integration of upper extremity movements during vocalization prepared patients for vocalization by inducing appropriate tension in the muscles related to speech. Melodic chanting requires the repetition of longer phrases and frequent accents, and the warm-up and accented vocalization encouraged patients to maintain their attention throughout the steps.
Despite the positive outcomes of this study, the absence of a control group is a weakness of the study design. During the recruitment period, stroke patients with several types of dysarthria volunteered to participate; however, the protocol was implemented only with patients who met the inclusion criteria. Clearly, additional investigations with stroke patients with other dysarthria types, as well as additional measures, are required to clarify how this music protocol improves speech functions.

Acknowledgments

This work was supported by the Ewha Global Top 5 Grant 2011 of Ewha Women's University, Seoul, Korea.

Conflicts of interest

There are no conflicts of interest regarding this research.

References

Baker, K. K., Ramig, L. O., Luschei, E. S., & Smith, M. E. (1998). Thyroarytenoid muscle activity associated with hypophonia in Parkinson's disease and aging. Neurology, 51(6), 1592-1598.
Bassiouny, S. (1998). Efficacy of the accent method of voice therapy. Folia Phoniatrica et Logopaedica, 50, 146-164.
Boersma, P., & Weenink, D. (2007). Praat: Doing phonetics by computer (Version 4.6.09) [Computer program]. Retrieved June 24, 2007.
Boltz, M. G. (1993). The generation of temporal and melodic expectancies during musical listening. Perception and Psychophysics, 53, 585-600.
Brady, M. C., Clark, A. M., Dickson, S., Paton, G., & Barbour, R. S. (2011). The impact of stroke-related dysarthria on social participation and implications for rehabilitation. Disability and Rehabilitation, 33(3), 178-186.
Darley, F. L., Aronson, A. E., & Brown, J. R. (1969). Differential diagnostic patterns of dysarthria. Journal of Speech and Hearing Research, 12, 246-269.
Duffy, J. R. (2005). Motor speech disorders: Substrates, differential diagnosis and management (2nd ed.). Saint Louis: C.V. Mosby.
Fex, B., Fex, S., Shiromoto, O., & Hirano, M. (1994). Acoustic analysis of functional dysphonia before and after voice therapy (accent method). Journal of Voice, 8, 163-167.
Fimbel, E. J., Domingo, P. P., Lamoureux, D., & Beuter, A. (2005). Automatic detection of movement disorders using recordings of rapid alternating movements. Journal of Neuroscience Methods, 146, 183-190.
Haneishi, E. (2001). Effects of a music therapy voice treatment on speech intelligibility, vocal acoustic measures, and mood of individuals with Parkinson's disease. Journal of Music Therapy, 38(4), 273-290.
Hixon, T. J. (1973). Respiratory function in speech. In F. Minifie, T. Hixon, & F. Williams (Eds.), Normal aspects of speech, hearing, and language (pp. 73-125). Englewood Cliffs, NJ: Prentice-Hall.
Kent, R., Weismer, G., Kent, J. F., Vorperian, H. K., & Duffy, J. R. (1999). Acoustic studies of dysarthric speech: Methods, progress, and potential. Journal of Communication Disorders, 32, 141-186.
Kim, H. H. (2009). Neuroanatomy for speech-language pathology. Seoul: Sigmapress.
Kim, S. J. (2010). Music therapy in stroke rehabilitation. Vascular Neurology, 1(1), 33-37.
Koelsch, S., Gunter, T. C., Cramon, D. Y., Zysset, S., Lohmann, G., & Friederici, A. D. (2002). Bach speaks: A cortical "language network" serves the processing of music. NeuroImage, 17, 956-966.
Kotby, M. H. (1995). The accent method of voice therapy. San Diego: Singular Publishing Group.
Leopold, N. A., & Kagel, M. C. (1997). Laryngeal deglutition movement in Parkinson's disease. American Academy of Neurology, 48(2), 373-375.
Mackenzie, C. (2011). Dysarthria in stroke: A narrative review of its description and the outcome of intervention. International Journal of Speech-Language Pathology, 13(2), 125-136.
Magee, W. L., Brumfitt, S. M., Freeman, M., & Davidson, J. W. (2006). The role of music therapy in an interdisciplinary approach to address functional communication in complex neuro-communication disorders: A case report. Disability and Rehabilitation, 28(19), 1221-1229.
Maess, B., Koelsch, S., Gunter, T. C., & Friederici, A. D. (2001). Musical syntax is processed in Broca's area: An MEG study. Nature Neuroscience, 4(5), 540-545.
Nagaya, M., Kachi, T., Yamada, T., & Igata, A. (1998). Videofluorographic study of swallowing in Parkinson's disease. Dysphagia, 13, 95-100.
Patel, A. D. (2003). Language, music, syntax and the brain. Nature Neuroscience, 6, 674-681.
Pilon, M. A., McIntosh, K. W., & Thaut, M. H. (1998). Auditory vs. visual speech timing cues as external rate control to enhance verbal intelligibility in mixed spastic-ataxic speakers: A pilot study. Brain Injury, 12(9), 793-803.
Platel, H., Price, C., Baron, J. C., Wise, R., Lambert, J., Frackowiak, R. S., Lechevalier, B., & Eustache, F. (1997). The structural components of music perception: A functional anatomical study. Brain, 120(2), 229-243.
Ramig, L. O., Countryman, S., & Thompson, L. L. (1995). The Lee Silverman Voice Treatment (LSVT): A practical guide to testing the voice and speech disorders in Parkinson's disease. Iowa City, IA: National Center for Voice and Speech.
Sellars, C., Hughes, T., & Langhorne, P. (2002). Speech and language therapy for dysarthria due to nonprogressive brain damage: A systematic Cochrane review. Clinical Rehabilitation, 16, 61-68.
Simberg, S., Sala, E., Tuomainen, J., Sellman, J., & Ronnemaa, A.-M. (2006). The effectiveness of group therapy for students with mild voice disorders: A controlled clinical trial. Journal of Voice, 20, 95-102.
Tamplin, J. (2008). A pilot study into the effect of vocal exercises and singing on dysarthric speech. NeuroRehabilitation, 23, 207-216.
Tjaden, K., & Watling, E. (2004). Rate and loudness manipulations in dysarthria: Acoustic and perceptual findings. Journal of Speech, Language, and Hearing Research, 47, 766-783.
Warlow, C. P., Dennis, M. S., van Gijn, J., et al. (2000). Stroke: A practical guide to management (2nd ed.). Oxford: Blackwell Scientific.