Monday, April 20, 2015

Plotting Factor Analysis Results

A recent factor analysis project (as discussed previously here, here, and here) gave me an opportunity to experiment with some different ways of visualizing highly multidimensional data sets. Factor analysis results are often presented in tables of factor loadings, which are good when you want the numerical details, but bad when you want to convey larger-scale patterns – loadings of 0.91 and 0.19 look similar in a table but very different in a graph. The detailed code is posted on RPubs because embedding the code, output, and figures in a webpage is much, much easier using RStudio's markdown functions. That version shows how to get these example data and how to format them correctly for these plots. Here I will just post the key plot commands and figures those commands produce. 

Aphasia factors vs. subtypes

One of the interesting things (to me anyway) that came out of our recent factor analysis project (Mirman et al., 2015, in press; see Part 1 and Part 2) is a way of reconsidering aphasia types in terms of psycholinguistic factors rather than the traditional clinical aphasia subtypes.

The traditional aphasia subtyping approach is to use a diagnostic test like the Western Aphasia Battery or the Boston Diagnostic Aphasia Examination to assign an individual with aphasia to one of several subtype categories: Anomic, Broca's, Wernicke's, Conduction, Transcortical Sensory, Transcortical Motor, or Global aphasia. This approach has several well-known problems (see, e.g., Caplan, 2011, in K. M. Heilman & E. Valenstein (Eds.), Clinical Neuropsychology, 5th Edition, Oxford University Press, pp. 22-41), including heterogeneous symptomatology (e.g., Broca's aphasia is defined by the co-occurrence of symptoms that can have different manifestations and multiple, possibly unrelated causes) and the relatively high proportion of "unclassifiable" or "mixed" aphasia cases that do not fit into a single subtype category. And although aphasia subtypes are thought to have clear lesion correlates (Broca's aphasia = lesion in Broca's area; Wernicke's aphasia = lesion in Wernicke's area), the correlation is weak at best: 15-40% of patients have lesion locations that are not predictable from their aphasia subtype.

Our factor analysis results provide a way to evaluate the classic aphasia syndromes with respect to data-driven performance clusters, that is, the factor scores. Our sample of 99 participants with aphasia had reasonable representation of four aphasia subtypes: Anomic (N=44), Broca's (N=27), Conduction (N=16), and Wernicke's (N=8); the 1 Global and 3 Transcortical Motor participants are not included here because of the small sample sizes. The figure below shows, for each aphasia subtype group, the average (+/- SE) score on each of the four factors. Factor scores should be interpreted roughly like z-scores: positive means better-than-average performance, negative means poorer-than-average performance.
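
The actual commands and data preparation are in the RPubs version; as a rough sketch, a summary figure like this one could be built with dplyr and ggplot2 along the following lines, assuming a long-format data frame called scores_long with (hypothetical) columns Subtype, Factor, and Score:

# Mean and standard error of each factor score within each aphasia
# subtype group (data frame and column names are assumptions for
# illustration, not necessarily those used in the RPubs code)
library(dplyr)
library(ggplot2)
score_means <- scores_long %>%
  group_by(Subtype, Factor) %>%
  summarise(M = mean(Score), SE = sd(Score) / sqrt(n()))

# Grouped bars with +/- SE error bars: one bar per subtype per factor
ggplot(score_means, aes(x = Factor, y = M, fill = Subtype)) +
  geom_bar(stat = "identity", position = position_dodge(width = 0.9)) +
  geom_errorbar(aes(ymin = M - SE, ymax = M + SE),
                width = 0.2, position = position_dodge(width = 0.9)) +
  labs(x = NULL, y = "Factor score (mean +/- SE)") +
  theme_bw()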


[Figure: average (+/- SE) score on each of the four factors, by aphasia subtype group. Credit: Mirman et al. (in press), Neuropsychologia]
At first glance, the factor scores align with general descriptions of the aphasia subtypes: Anomic aphasia is relatively mild, so performance was generally better than average; participants with Broca's aphasia had production deficits (both phonological and semantic); participants with Conduction aphasia had phonological deficits (both speech recognition and speech production); and Wernicke's aphasia is more severe, so these participants were relatively impaired on all factors, particularly the semantic recognition factor. However, these central tendencies hide a tremendous amount of overlap among the four aphasia subtype groups on each factor. This can be seen in the density distributions of exactly the same data:
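
Again, the exact commands are on RPubs; a density version of the same data might look something like this (same assumed scores_long data frame as above), with one panel per factor:

# Density of individual factor scores within each subtype, faceted by factor
ggplot(scores_long, aes(x = Score, colour = Subtype, fill = Subtype)) +
  geom_density(alpha = 0.3) +
  facet_wrap(~ Factor) +
  labs(x = "Factor score", y = "Density") +
  theme_bw()

[Figure: density distributions of factor scores for each aphasia subtype, one panel per factor]
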
As one example, consider the top left panel: the Wernicke's aphasia group clearly had the highest proportion of participants with poor semantic recognition, but some participants in that group were in the moderate range, overlapping with the other groups. Similarly, the other panels show that it would be relatively easy to find an individual in each subtype group who violates the expected pattern for that group (e.g., a participant with Conduction aphasia who has good speech recognition). This means that the group label only provides rough, probabilistic information about an individual's language abilities and is probably not very useful in a research context where we can typically characterize each participant's profile in terms of detailed performance data on a variety of tests. Plus, as our papers report, unlike the aphasia subtypes, the factors have fairly clear and distinct lesion correlates.

In clinical contexts, one usually wants to maximize time spent on treatment, which often means trying to minimize time spent on assessment, so a compact summary of an individual's language profile can be very useful. Even so, I wonder whether continuous scores on cognitive-linguistic factors might provide more useful clinical guidance than an imperfect category label.


Mirman, D., Chen, Q., Zhang, Y., Wang, Z., Faseyitan, O.K., Coslett, H.B., & Schwartz, M.F. (2015). Neural organization of spoken language revealed by lesion-symptom mapping. Nature Communications, 6 (6762), 1-9. DOI: 10.1038/ncomms7762.
Mirman, D., Zhang, Y., Wang, Z., Coslett, H.B., & Schwartz, M.F. (in press). The ins and outs of meaning: Behavioral and neuroanatomical dissociation of semantically-driven word retrieval and multimodal semantic recognition in aphasia. Neuropsychologia. DOI: 10.1016/j.neuropsychologia.2015.02.014.

Friday, April 17, 2015

Mapping the language system: Part 2

This is the second of a multi-part post about a pair of papers that just came out (Mirman et al., 2015, in press). Part 1 was about the behavioral data: we started with 17 behavioral measures from 99 participants with aphasia following left hemisphere stroke. Using factor analysis, we reduced those 17 measures to 4 underlying factors: Semantic Recognition, Speech Production, Speech Recognition, and Semantic Errors. For each of these factors, we then used voxel-based lesion-symptom mapping (VLSM) to identify the left hemisphere regions where stroke damage was associated with poorer performance. 
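
For readers who want to try something similar, the core of such a factor analysis in base R looks roughly like the sketch below. It assumes the 17 behavioral measures are columns of a data frame called measures (a name I made up for illustration); the rotation and scoring options shown are common defaults, not necessarily the exact choices reported in the papers.

# Exploratory factor analysis: reduce the 17 behavioral measures to
# 4 factors and extract per-participant factor scores
# (data frame name and option choices are illustrative assumptions)
fa_fit <- factanal(measures, factors = 4,
                   rotation = "varimax", scores = "regression")
print(fa_fit$loadings, cutoff = 0.3)   # loadings table, hiding small values
factor_scores <- as.data.frame(fa_fit$scores)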

Thursday, April 16, 2015

Mapping the language system: Part 1

My colleagues and I have a pair of papers coming out in Nature Communications and Neuropsychologia that I'm particularly excited about. The data came from Myrna Schwartz's long-running anatomical case series project, in which behavioral and structural neuroimaging data were collected from a large sample of individuals with aphasia following left hemisphere stroke. We pulled together data from 17 measures of language-related performance for 99 participants, each of whom was also able to provide high-quality structural neuroimaging data to localize their stroke lesion. The behavioral measures ranged from phonological processing (phoneme discrimination, production of phonological errors during picture naming, etc.) to verbal and nonverbal semantic processing (synonym judgments, Camel and Cactus Test, production of semantic errors during picture naming, etc.). I have a lot to say about our project, so there will be a few posts about it. This first post will focus on the behavioral data.