
人物专栏 | Karen Emmorey教授访谈



编者按

《理论语言学五道口站》(2022年第32期,总第235期)“人物专栏”与大家分享Stephen Wilson副教授近期对Karen Emmorey教授的访谈。Stephen Wilson,美国范德堡大学医学中心副教授。Karen Emmorey,美国圣地亚哥州立大学教授。


本期访谈节选自Stephen Wilson副教授于2021年4月与Karen Emmorey教授所做的一期播客。在访谈中,Karen Emmorey教授围绕其所著Language, Cognition, and the Brain: Insights From Sign Language Research一书,首先谈及手语与口语在音系、句法方面的共性,其次阐述了手语相关的神经机制,并探讨了手语失语症相关的脑科学研究。访谈内容转自网站:https://langneurosci.org/podcast/,由本站成员赵欣宇、雷晨、聂简荻、郭思源、丁子意翻译。


采访人物简介

Karen Emmorey教授


Karen Emmorey,美国圣地亚哥州立大学言语、语言与听觉科学系教授。她的主要研究兴趣为手语、美国手语以及认知心理学。她的手语研究融合了不同学科,如听力学、感知、人工智能和自然语言处理等。她所著Language, Cognition, and the Brain: Insights From Sign Language Research一书,通过研究手语和使用手语的聋人,阐明了对人类语言、认知以及大脑的认识。


Brief Introduction of Interviewee


Karen Emmorey is a Professor in the School of Speech, Language, and Hearing Sciences at San Diego State University. Her scientific interests lie mostly in sign language, American Sign Language, and cognitive psychology. Her sign language research integrates concerns from other disciplines, such as audiology, perception, artificial intelligence, and natural language processing. Her book Language, Cognition, and the Brain: Insights From Sign Language Research illustrates what can be learned about human language, cognition, and the brain by studying signed languages and the deaf people who use them.


采访者简介

Stephen Wilson副教授


Stephen Wilson,美国范德堡大学医学中心副教授。他的研究兴趣主要为语言的神经基础,侧重于大脑的语言处理机制(尤其是不同类型的失语症患者)。


Brief Introduction of Interviewer


Stephen Wilson is an Associate Professor at Vanderbilt University Medical Center, USA. His primary research interests are in the neural basis of language, focusing on language processing mechanisms in the brain (especially in patients with different kinds of aphasia).



访谈内容


01.

Stephen Wilson副教授:我想先谈一谈美国手语(American Sign Language, ASL)以及一般手语的结构,然后再谈谈相关的神经机制。就结构而言,我感兴趣的是手语在哪些方面与口语相似(即便不完全相同),又在哪些方面存在有趣的差异。所以我们能先谈谈(手语)音系学吗?顺便说一句,您的书(Language, Cognition, and the Brain: Insights From Sign Language Research)非常好,虽然已出版20年,但我认为它至今仍很有价值,因为它系统地梳理了这个领域及相关研究。如果有人想了解更多背景知识,我一定会推荐这本书。在您的书中,您谈到了它是否应被称为(手语)音系学的争论,以及支持和反对这一叫法的理由。您现在对这个问题怎么看?


Karen Emmorey教授:我认为将其命名为(手语)音系学是明智的选择,而不是采用早期的提议,比如cherology之类让它听起来截然不同的名称。我之所以认为“手语音系学”这一叫法是正确的,是因为这种平行关系确实揭示了人类语言本质中某种根本性的东西:我们拥有一个纯粹基于形式的结构层面,并且在单位、节奏单位等方面可以看到明显的相似。这些都只是基于形式的模式、规则和结构。认识到这一点非常重要,即使发音器官和感知系统完全不同,人类语言也以同样的方式运作。因此,这些平行之处才是真正有趣的:不仅语言结构上存在重合,可能负责加工音系结构的脑区也存在重合。当然,这也让差异的考察变得很有意思。口语中有更多的序列结构,因为口语的发音器官速度更快,而听觉系统擅长感知快速的差别;手语则有更多平行的、同时性的结构,手是较慢的发音器官,所以需要把大量信息分层叠加。这对加工和语言结构都有有趣的影响。此外还有很多问题可以进一步区分,比如哪些神经模式确实与言语发音器官或听觉加工相关,哪些神经加工则与使用双手、使用空间区别相关,这样就能了解什么是口语特有的,什么是手语特有的。


02.

Stephen Wilson副教授:口语中的音位大体上是按顺序排列的,而正如您提到的,手语中有多个通道。我认为其中一个非常显著的通道是手形,然后还有动作和位置?这些是不是基本的原始要素?


Karen Emmorey教授:这些就是我们在考察组合时所关注的基本参数,它们显然需要被组装起来。比如我们能观察到“手误”(slips of the hand),发生替换的正是这些参数:用一个手形替换另一个手形,或者用一个位置、一个动作替换另一个。我们还能观察到“指尖现象”(tip of the fingers),也就是意义与形式的分离:你知道自己想要提取的那个手语词的意义,却提取不出它的形式,即它的音系。


03.

Stephen Wilson副教授:在这种情况下,手语使用者是否会像英语使用者那样,对词的形式有部分的了解呢?

 

Karen Emmorey教授:是的,所以我们看到,人们倾向于回忆起手形或位置,而较难回忆起动作。有趣的是,我们据此提出,手形和位置通常是同时出现的,至少它们常常被这样感知。这就相当于口语中的首音,也就是第一个音,有点像手语词的“起始”(onset)。而更难提取的则是动作,也就是在时间上延展的那个成分。


04.

Stephen Wilson副教授:手语并不以音位串的方式组织,因为手语中的词几乎都只有一个音节。所以手语实际上更多是不同层面上同时呈现的信息,而不是线性排列的无意义单位,请问是这样吗?

      

Karen Emmorey教授:是的。最初人们认为手语没有线性结构,一切都是同时进行的。后来,Scott Liddell教授和其他人表明,有些手语词具有线性结构:位置-动作-位置。所以手语中确实存在线性结构,只是远不及口语中丰富。


05.

Stephen Wilson副教授:在我看来,句法是语言中最有趣的部分。您可以谈谈手语和口语之间有哪些显著的相同点和不同点吗?


Karen Emmorey教授:有相当充分的证据表明,手语与口语在短语结构的类型、短语的组合与生成方面非常相似:短语结构的类型、对指称的限制,以及生成短语和进行基本句法加工的神经基础,在手语和口语中似乎都非常平行。这一点很好,因为它让我们可以反过来考察差异。我特别感兴趣的是空间如何被用于共指,例如代词和动词如何借助空间表示谁对谁做了什么。这与口语确实不同:口语往往用词缀来标记指称或动词的指称,这些词缀可以储存在词库中,由承担相应功能的语法词素来实现。而在手语中,我们用空间中的位置来建立指称,然后可以把代词或动词指向那个位置,以表示例如“宾语”。那个空间位置本身其实并不是真正的语言表征,而且这样的位置可以有很多个,这正是它真正有趣的地方。手语还会在语义上利用空间:如果我在话语中谈论一个高个子的人,我就会把相应的手语词指向空间中较高的位置。


06.

Stephen Wilson副教授:我认为美国手语的SVO语序与英语一样,请问这个观点对吗?


Karen Emmorey教授:是的,美国手语的基本语序是SVO。不同的是,与英语相比,美国手语允许更多不同的语序。美国手语使用了大量的话题-述题结构,可以把成分移到句首作为话题,这就允许出现更多诸如OSV之类的语序。最初,人们认为“美国手语中没有语序”,直到Scott Liddell教授发现了标记话题的面部表情。所以我们不能只是把成分随意移来移去,它们必须在语言上得到标记。而一旦面部表情被确认为一种语法标记,人们就意识到美国手语有一个基本语序,但存在很多可以被标记出来的语序变化。


07.

Stephen Wilson副教授:我们前面已经谈到了音系和句法,那么词库方面呢?我认为最有趣的区别之一可能是手语有更大的象似性潜力。您能谈谈手语中的词库与口语相比有什么异同吗?


Karen Emmorey教授:是的,我们已经做了很多工作,试图理解手语象似性的性质。口语中也存在象似性,比如拟声词;还有一些口语,比如日语,拥有拟态词(ideophones),甚至有相当系统的语音象征系统。只是与手语相比,口语中的象似性要弱一些,部分原因在于,让声音听起来像它所表达的事物比较困难,而让形式看起来像它所表达的事物要容易得多。双手可以展示动作、呈现视觉表征、描摹形状,所以手语天然具有象似性的潜力。我想,如果口语能做到更多,它们也会这样做。


08.

Stephen Wilson副教授:现在让我们来谈谈手语的神经基础,这方面的问题Ursula Bellugi教授和她的同事在90年代就已经开始研究了。我想您也参与了其中的一些研究。您能谈谈有关手语失语症研究的发现吗?


Karen Emmorey教授:当人们开始思考手语的大脑组织时,首先想到的问题是:大脑的哪个半球参与了手语加工?那时还没有功能性磁共振成像(fMRI),唯一的数据来自患有脑卒中或脑损伤的手语者。我们从口语研究中已经知道,左半球受损会出现明显的失语症;右半球受损则不会出现失语症,但可能出现空间障碍,比如寻路和空间认知方面的问题。手语非常有趣,因为已有研究表明手语是真正的语言,具有与口语相同的语言结构,如句法、音系和形态,但它同时使用空间,而且在每一个结构层面都使用空间:在音系层面,存在身体上不同位置的区别;前面也谈到,句法上用空间来表示指称对象。那么,手语也许因为其信号和媒介是空间性的而由右半球负责,也许是双侧负责,也许仍由左半球负责。来自中风病人的数据清楚地表明:左半球损伤会导致手语失语症,包括非流利型失语症和流利型失语症;而右半球损伤不会导致失语症。右半球受损确实会伴随空间障碍,但这些障碍并没有体现在语言上,患者并不表现为失语。这真正告诉我们的是,大脑在意的是“语言”。左半球确实是语言半球,而它之所以是语言半球,并不是因为言语依赖听觉、需要快速加工,也不是因为使用了声道,因为手语呈现出同样的组织方式。这再次告诉我们,大脑为什么会以这样的方式组织。


09.

Stephen Wilson副教授:那些关于失语症的研究也表明,大脑左半球中手语处理机制的分布与处理口语的机制有些类似,请问是这样的吗?


Karen Emmorey教授:在基本层面上是这样。如果是额叶损伤,看到的就是非流利型失语症:额叶受损的手语者理解能力相当好,但打手语非常费力,构音动作也很吃力。这和口语失语症一样令人沮丧。如果是更靠后的损伤、颞叶损伤,看到的则是流利的手语,但伴随大量语法错误,而且常常不知所云,就像口语失语症中的情形一样。


10.

Stephen Wilson副教授:在过去20年的研究中,您一定记录了手语处理和口语处理的相关神经基础的一些差异。我对正电子发射型计算机断层显像(PET)和功能性磁共振成像(fMRI)研究特别感兴趣。您能谈谈在这方面有什么重大发现吗?

Karen Emmorey教授:我一直感兴趣的问题之一是空间语言,也就是如何谈论空间关系,因为这正是手语与口语真正不同的地方。在口语中,谈论身边的空间,比如房间的布局,通常会使用介词,也就是“on”“under”“around”这类封闭类语法成分。而手语不是这样表达空间语言的。基本上,我们用手形来代表所谈论的物体,比如平面或圆柱形的物体;然后,我把那个平直的手形或弯曲的手形放在空间中的什么位置,一只手放在另一只手的上面还是下面,就表达了那些物体所处的位置。与“on”或“under”相比,这更接近一种梯度式的表征。于是,当我们考察产生这类表达的神经表征,或者说参与产生这类表达的神经区域时,看到的是双侧顶叶上部的参与,而这在口语中是看不到的:口语往往也涉及顶叶区域,但通常偏向左侧,位于边缘上回。所以,要产生这类表达,似乎必须调用双侧顶叶区域。不过有趣的是,手形本身具有词素性质,储存在词库中,比如表示弯曲物体或平面的特定手形。当我们只考察检索这些物体分类词手形所涉及的神经区域时,看到的则是语言区域:检索这类表达时,左侧额下回和左侧颞下回都会参与,这与这些表达的空间方面形成对比。我一直试图把这些结构的语言性方面与更偏模拟性、手势性的方面区分开来。


11.

Stephen Wilson副教授:也许这只是一个哲学问题,您是否观察到整个顶叶在手语的空间表达方面发挥作用?在您看来,这时它的参与是语言性的还是非语言性的?


Karen Emmorey教授:实际上,我更倾向于把它看作一个认知系统。为了完成那种空间映射,就需要调用空间系统,它是空间认知和语言之间的接口。所以,是否把接口看作语言性的,取决于个人的观点。但我认为那正是空间认知和语言交汇的地方;而由于语言与空间认知之间的映射方式不同,手语与口语中二者交汇的方式也不一样。


12.

Stephen Wilson副教授:您刚才还提到了左侧边缘上回,我记得您说的是口语的情况,对吗?但在手语中,同样能很清楚地观察到边缘上回参与手部的运动控制,是这样吗?

 

Karen Emmorey教授:事实上,我们做过一个实验,专门考察不同类型手语词的产生,比如单手或双手的手语词、锚定于身体的手语词、在身体不同位置产生的手语词等,试图弄清哪些神经区域参与了不同类型手语词的产生。结果显示,左侧边缘上回(右侧也有少量激活,略呈双侧)是所有类型手语词的产生都会调用的区域。因此,结合其他数据,我们假设左侧边缘上回特别参与手语的音系组装;它可能也参与口语的音系组装,但大概不是完全相同的区域。这个区域负责组合手语中抽象的音系单位。


English Version


01.

Prof. Stephen Wilson: I wanted to talk a bit about the structure of American Sign Language and sign languages in general, and then kind of talk about the neural correlates. And in terms of structure, what I'm interested in is like, all of the ways in which sign languages are analogous, if not identical to spoken languages, and also the ways in which they're interestingly different. So can we talk about phonology? Your book is really great, by the way. It's 20 years old, but I think it's very current still. I'm sure there's a lot learned since then. But I think it lays out the field and this field of research effectively. So if anybody wants more of a background, I would definitely recommend that. So in your book, you kind of talk about this debate about whether it should be called phonology, right? And you kind of talk about, why would you call it phonology, why might you not. What do you think about that these days?

 

Prof. Karen Emmorey: Well, I think it was the wise choice to decide to call it phonology, as opposed to early proposals like cherology or something, making it sound really different. And the reason I think it was the right choice to talk about sign language phonology is that I think the parallels really say something fundamental about the nature of human language. The fact that we have a level of structure that is just based on form and that you can see clear parallels in terms of units, rhythmic units. These are all just form-based patterns and rules and structures. I think that's really important to recognize, that human languages work that way even if you have a completely different set of articulators and a different perceptual system. So those parallels are really what's interesting, and seeing the overlap both in linguistic structure but also in regions of the brain that might be processing the phonological structure. There are overlaps. But then of course it also makes it interesting to look at the differences. You see much more serial structure for spoken languages because the articulators are much quicker, and the auditory system is good at perceiving fast differences. Sign languages have more parallel structures, simultaneous structures. The hands are slower articulators, so you layer a lot of information. That has interesting consequences both for processing and also for linguistic structure. There are lots of other things you can kind of look at to tease apart, do we see neural patterns that are really linked to the speech articulators, or auditory processing, compared to neural processing that's really linked to the fact that you're using your hands and you're using spatial distinctions. So you can see what's specific to speech and what's specific to sign.

 

02.

Prof. Stephen Wilson: Whereas in spoken language you've kind of got these phonemes that are kind of arranged sequentially, in sign language like you mentioned there are multiple channels. So I think one really salient channel is hand shape. And then you've got movement and then location? Are those kinds of fundamental primitives?

 

Prof. Karen Emmorey: Those are the basic parameters that we look at in terms of combination. They clearly need to be assembled. So you see things like slips of the hand and those are the parameters that slip. You substitute one hand shape for another or one location or movement for another. You also see the tip of the fingers. You have a separation between meaning, you know the meaning of the sign that you want to retrieve, but you're not able to get the form, the phonology of it.

 

03.

Prof. Stephen Wilson: Do people in those circumstances come up with partial knowledge of the form like in English?

 

Prof. Karen Emmorey: Yeah, so what you see is people tend to recall the hand shape or the location. They're less likely to recall the movement. And so what's interesting is that we suggest then that the hand shape and location often occur simultaneously. At least they're perceived that way often. So that is equivalent to the onset of spoken language, the first sound. So it's kind of like the onset of the sign. The thing that is more difficult to retrieve, then, is the movement, the thing that spans over time.

 

04.

Prof. Stephen Wilson: Sign languages are not about strings or phonemes. Because you rarely have more than one syllable in a word. So it's really more about the simultaneous information on different tiers, rather than kind of linearly arranged, meaningless units, right?

 

Prof. Karen Emmorey: Yeah. Originally, it was thought there was no linear structure, everything was simultaneous. And then Scott Liddell and others came along and showed that there are signs that have a linear structure, location-movement-location. So you do have linear structure, it's just much less than for spoken languages.


05.

Prof. Stephen Wilson: I think syntax is the most interesting part of language. So do you think you could talk about what are some of the striking similarities and differences between sign languages and spoken languages?

 

Prof. Karen Emmorey: I think there's pretty good evidence that the type of phrase structure combinations and creating phrases are very similar in signed and spoken languages. So that type of phrase structure and constraints on reference, and the neural underpinnings for creating phrases and for basic syntactic processing seem to be very parallel for sign and spoken languages. And so that actually is nice, because then it gives us a way to then again, look at the differences. So the things I've been interested in, in particular, are how space is used for co-reference, for example, for pronouns, for verbs to indicate who did what to whom. And that's really different than spoken languages, where you tend to have an affix that can mark reference or verb reference. And those can be stored in the lexicon. You have grammatical morphemes that carry out those functions. Whereas for sign language, you set up the reference with locations in space, and then I can direct a pronoun toward that location or a verb towards that location to indicate 'object', for example. And that location itself is really not a linguistic representation. That's what makes it really interesting because there are a number of them. And the way sign languages work is to use space semantically. So if I'm talking about a tall person in my discourse, I'm going to direct the sign at, or I'm going to ask a tall person, toward a high location in space.

 

06.

Prof. Stephen Wilson: I think in ASL it's SVO like in English, is that right?

 

Prof. Karen Emmorey: Yes, for basic word order. What's different is that more different word orders are allowed in ASL than in English. So ASL uses a lot of topic-comments. So you can move things to the front of the sentence as the topic, which allows you to have more things like OSV orders. So originally, people thought "Oh, there's no word order in ASL" until Scott Liddell discovered facial expressions that mark the topic. So you can't just move things around, they have to be linguistically marked. And once facial expressions were recognized as a grammatical marking, then it was recognized that we have a basic order, but there are lots of variabilities, that can be marked.

 

07.

Prof. Stephen Wilson: Then so we talked about phonology and syntax. How about the lexicon? So I think that maybe one of the most interesting differences is that there's a lot more potential for iconicity. Can you talk about the similarities and differences between the lexicon in sign language compared to spoken language?

 

Prof. Karen Emmorey: Yeah, so we've done a lot of work trying to understand the nature of iconicity in sign languages. Part of it is, that there's iconicity in spoken languages, so things like onomatopoeia. There are other spoken languages, like Japanese, that have ideophones, and can have a whole systematic sound symbolism system. It's just a little bit more reduced compared to sign languages, partly because it may be harder to make things sound like what they mean. It's a lot easier to make things look like what they mean. You have the hands to show actions, visual representations, and tracing of shapes, so there's just a potential for iconicity. I think if spoken languages could do more, they would do more.

 

08.

Prof. Stephen Wilson: So let's talk about the neural basis of sign language, which I think was investigated by Ursula Bellugi and her colleagues in the 90s. And I think you were involved in some of these studies. So can you talk about what was found with sign aphasia?


Prof. Karen Emmorey: So the first question, when people were thinking about the brain organization for sign language, was, what hemisphere of the brain is involved in signing? So this is before fMRI. The only data came from signers with stroke or brain injury. We knew from spoken language that if you have an injury to the left hemisphere you have frank aphasias. If you damage the right hemisphere, you don't have aphasia but can have spatial impairments, wayfinding, and spatial cognition. Sign language is really interesting because the work has shown these are languages, they have the same linguistic structure as spoken languages, syntax, phonology, and morphology. But they use space. They use space at every level. So, in phonology, you have location differences in the body. We've already talked about syntax using space for referents. So, maybe sign languages are represented in the right hemisphere because of the signal, the medium. Maybe they're more bilateral, or maybe they're in the left hemisphere. And what the data from the stroke patients clearly showed was that damage to the left hemisphere created sign language aphasias, so nonfluent aphasias, and fluent aphasias, whereas right hemisphere damage did not create aphasia. You did see spatial impairments with right hemisphere damage but they didn't come out in the language. They didn't appear aphasic. So that really told us that what the brain cares about is language. The left hemisphere is really the language hemisphere and it's not there because speech is auditory, for fast processing, or that the vocal tract is being used, because you see the same organization for sign languages. It's telling us, again, something about why the brain is organized the way it is.

 

09.

Prof. Stephen Wilson: Those studies with aphasia also showed that within the left hemisphere, the layout of sign language processing was kind of analogous to spoken language, right?

 

Prof. Karen Emmorey: At a basic level. If you look at frontal damage, what you see are nonfluent aphasias. The signers with frontal damage, comprehend pretty well but have a very effortful signing, and effortful articulation. Frustrating, as it is with spoken language aphasia. With more posterior damage, and temporal damage, what you see is fluent signing, but with lots of grammatical errors, it doesn't always make sense. Just like what you see with spoken language.

 

10.

Prof. Stephen Wilson: In some of your work in the last 20 years you've definitely documented some differences in the neural correlates of sign language processing and spoken language processing. I'm especially interested in PET and fMRI studies. Can you talk about the big picture findings there?

 

Prof. Karen Emmorey: So one of the things I've been interested in is spatial language. So, talking about spatial relationships because that's where sign languages and spoken languages are really different. In spoken languages, often to talk about space around you, the layout of a room, you're going to use prepositions. These closed-class grammatical items, 'on', 'under', 'around'. Whereas for sign languages, that's not the way spatial language is produced. Basically, you have hand shapes that represent the objects that you're talking about. So flat surfaces, cylindrical objects. And then it's the location of where I placed that flat hand or that curved hand in space, one on top of the other, one under it. That's telling you where those items are located. That's much more of a gradient type of representation than 'on' or 'under'. And so in looking at the neural representation for that, or the neural regions that are involved in producing those types of expressions, what we see is bilateral superior parietal involvement involved in the production of those expressions, that you don't see for spoken language. For spoken language, it tends to involve parietal regions, but it's usually left lateralized, and it's in the supramarginal gyrus. So it does seem that to produce these types of expressions, you have to recruit bilateral parietal regions. What's interesting, though, is the handshapes themselves, which are morphemic, are stored in the lexicon. So a particular hand shape for curved objects or flat surfaces. When you look at the neural regions that are involved in just retrieving the object classifiers or the object handshapes, then you see language regions. So left inferior frontal, and left inferior temporal, are engaged when you're retrieving those types of expressions. In contrast to the spatial aspects of those expressions. I've tried to tease apart the linguistic aspects of those constructions from the sort of more analog or gestural aspects of them.

 

11.

Prof. Stephen Wilson: Maybe this is just a philosophical question, but do you see that whole parietal involvement in the spatial aspect of sign language? Do you see that as being linguistic or not?

 

Prof. Karen Emmorey: So I actually am thinking of it as more of a cognitive system. So you, in order to do that kind of spatial mapping, you need to recruit the spatial, it's the interface between spatial cognition and language. So whether you think about interfaces as linguistic or not kind of depends on your point of view. But I think of it as that's where spatial cognition and language come together, and they come together in a different way for sign languages than for spoken languages because of the mapping between language and spatial cognition.

 

12.

Prof. Stephen Wilson: You also mentioned the left supramarginal gyrus there. I think you said for spoken language, right? But you also see pretty clear involvement of supramarginal gyrus and kind of motor control of the hands for sign too, right?

 

Prof. Karen Emmorey: And in fact, when we did an experiment just looking at the production of different types of signs, so one-handed or two-handed signs, or body-anchored signs, signs that are produced at locations on the body, to try to understand what neural regions were involved in the production of different types of signs. And the supramarginal gyrus was the left supramarginal gyrus. A little bit on the right, a little bit bilateral. But it was the region that was engaged in the production of all sign types. And so we are hypothesizing, along with other data, that the left supramarginal gyrus is particularly involved in the assembly of phonology for sign languages. It may also be involved in spoken languages as well. It's probably not exactly the same region, but the area is involved in combining abstract phonological units for sign.



往期推荐

石锋 | 演化语言学的宏观史、中观史和微观史
蔡维天|王心凌语言学
理论与方法专栏 | 并列与句法-语篇接口
人物专栏 | Noam Chomsky教授访谈
学术访谈| Noam Chomsky 答学生问

本文版权归“理论语言学五道口站”所有,转载请联系本平台。


编辑:闫玉萌 赵欣宇 雷晨

排版:闫玉萌 赵欣宇 雷晨

审校:李芳芳 田英慧

英文编审责任人:赵欣宇

