author | JJ | 2024-05-05 22:43:17 +0000
---|---|---
committer | JJ | 2024-05-05 22:43:17 +0000
commit | 500512d7c1ea685db03413349e5696190498b4f2 (patch)
tree | 84ef27d2b5f67cf2bbff4c131b2d65ec8ad460b6 /linguistics
parent | 2f17bc254bd94a32847ea71fe0433acf364c1c39 (diff)
meow
Diffstat (limited to 'linguistics')
-rw-r--r-- | linguistics/syntax.md | 122
1 file changed, 99 insertions, 23 deletions
diff --git a/linguistics/syntax.md b/linguistics/syntax.md
index 17fd901..f9f1333 100644
--- a/linguistics/syntax.md
+++ b/linguistics/syntax.md
@@ -3,7 +3,7 @@ layout: linguistics
title: linguistics/syntax
---
-# morphology and syntax
+# morphosyntax

Morphology is the study of word formation. Syntax is the study of sentence formation.<br>
Specifically, both morphology and syntax focus on **structure**.
@@ -15,7 +15,7 @@ These notes are ordered in a way that I feel builds upon itself the best. This i
Certainly, all of syntax cannot be taught at once. Yet the desire to generalize and apply what one has learned to real-world examples is strong, and it is extraordinarily difficult to teach syntax in a way that builds upon itself naturally. This is my best attempt, but it will fall flat in places: when it does, I do recommend either skipping ahead or being content with temporarily (hopefully temporarily) not knowing what's going on.
-<details markdown="block">
+<details markdown="block" open>
<summary>Table of Contents</summary>

- History of Syntax
@@ -35,21 +35,29 @@ Certainly, all of syntax cannot be taught at once. Yet the desire to generalize
  - Lexical Entries [SKS 6.8]
- [Minimalism](#minimalism) [n/a]
- [Merge, Part II](#merge-part-ii)
-  - Projection [SKS 5]
+  - [Projection](#projection) [SKS 5]
  - Selection [SKS 8]
-- Move [SKS 8]
+- [Move, Part I](#move-part-i) [SKS 8]
  - [Affix Hopping](#affix-hopping)
+  - Verb Raising
+  - Subject-Auxiliary Inversion
  - Head Movement [SKS 8.3]
-  - [Subject Raising](#subject-raising) [SKS 12.4]
+- Move, Part II
  - Wh- Movement [SKS 10]
+  - Topicalization
+  - Phrasal Movement
+  - [Subject Raising](#subject-raising) [SKS 12.4]
- Agree
  - Theta Roles [SKS 6.8.1]
  - Locality
  - [Binding](#binding) [SKS 7]
  - Raising & Control [SKS 9]
- Advanced Syntax
+  - [On languages other than English](#on-languages-other-than-english)
  - Negation
  - Ellipsis
+  - Shifting
+  - Scrambling
- Parsing
- References
@@ -72,10 +80,10 @@ Certainly, all of syntax cannot be taught at once. Yet the desire to generalize

## Merge, Part I

We concluded the following from our excursion into morphology:
-- words are composed of morphemes
-- morphemes come in categories
-- morphemes combine in a regular fashion
-- morphemes can be silent
+- words are composed of *morphemes*
+- morphemes come in *categories*
+- morphemes *combine in a regular fashion*
+- morphemes can be *silent*

Surprisingly (or unsurprisingly), we shall see that these ideas generalize to sentence structure as a whole.
@@ -89,20 +97,34 @@ Why are proper names $D$s? Why is it possible to say either *I moved the couches
These inconsistencies can all be addressed by one (strange) concept: the idea of *silent morphemes*, invisible in writing and unpronounceable in speech. We represent such morphemes as ∅, and so may write the earlier strange sentence as *I moved ∅-couches*.
-...
+These silent morphemes are extremely useful to our syntactic analyses. Consider our examples above. With silent morphemes, we can describe the verb *moved* as taking in two $D$s as arguments: *I* and *couches*.
Without silent morphemes, we would have to account for both $D$ phrases and $N$ phrases as arguments, duplicating our lexical entry, destroying the structural similarity to *I moved the couches*, and ultimately duplicating our work.
+
+```latex
+\begin{forest}
+[V
+ [D [I, roof]]
+ [V_D
+  [V_{D,D} [moved]]
+  [D
+   [D_N [∅]]
+   [N [couches]]]]]
+\end{forest}
+```
-p-features | f-features
------------|-----------
-the | $D_{N}$
-a | $D_{N (-plural)}$
-∅ | $D_{N (+plural)}$
+So silent morphemes are extremely handy. But: what is stopping us from using an *excess* of silent morphemes? After all, if they're just morphemes, they can go anywhere. And if they're silent, we can't observe *where* they go. We will revisit this topic, but for now, we shall consider the list of silent morphemes to be very small, and their use both regular and rare.
+
+p-features | f-features | s-features
+-----------|------------|-----------
+the | $D_{N}$ | definite
+a | $D_{N (-plural)}$ | indefinite
+∅ | $D_{N (+plural)}$ | indefinite

p-features | f-features | s-features
-----------|------------|-----------
will | $T_{D,V}$ | future
-ed | $T_{D,V}$ | past
∅ | $T_{D,V}$ | present
-to | $T_{D,V} (-tense)$ | infinitive
+to | $T_{D,V}$ | infinitive

These tables are using notation and language formally introduced at the end of the next section. Ignore them for now.
@@ -112,8 +134,6 @@ So far, we've been discussing syntax and giving examples using somewhat informal
### X'-theory

-**X'-theory** (x-bar theory) is a notation originally put forth by Chomsky...
-
...

### Bare Phrase Structure
@@ -224,11 +244,15 @@ What exactly a lexical entry contains is up to some debate. The English language
- semantic features (**s-features**): the role of the entry and its arguments in the sentence
  - Not all lexical entries have s-features. For tense/aspect/etc, these are their appropriate tense/aspect/etc. For verbs, these are typically *theta roles* (which we shall address later).
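As a worked example of this formalism, consider the overt-determiner sentence *I moved the couches*. Per the tables above, *moved* has f-features $V_{D,D}$ and so selects two $D$s, while *the* has f-features $D_N$ and so selects an $N$, projecting a $D$. The resulting tree (a sketch of mine, paralleling the *∅-couches* tree above) is:

```latex
\begin{forest}
[V
 [D [I, roof]]
 [V_D
  [V_{D,D} [moved]]
  [D
   [D_N [the]]
   [N [couches]]]]]
\end{forest}
```

Note that the structure is identical to the *∅-couches* tree, with *the* in place of ∅: this is exactly the structural similarity that silent morphemes let us preserve.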
+This is the formalism expressed in our tables earlier.
+Heads select for the features of their complements and project their own features.
+Adjuncts select for the features of their heads but do not project their features.
+
## Minimalism

-[Minimalism](https://en.wikipedia.org/wiki/Minimalist_program) is a *program* that aims to reduce much of the complexity surrounding syntactic analysis. While our theories may end up providing for adequate analyses of natural languages, this is not enough. Phrase structure rules, too, were *adequate*: yet we rejected them for their sheer complexity. If we can explain what we observe in a simpler framework, *we should adopt that framework*. Much of modern advancements in syntactic analysis have come out of Minimalism: bare phrase structure, in particular.
+[Minimalism](https://en.wikipedia.org/wiki/Minimalist_program) is a *program* that aims to reduce much of the complexity surrounding syntactic analysis. While our theories may end up providing for adequate analyses of natural languages, this is not enough. Phrase structure rules, too, were *adequate*: yet we rejected them for their sheer complexity. If we can explain what we observe in a simpler framework, *we should adopt that framework*. Many modern advancements in syntactic analysis have come out of Minimalism: the notation of bare phrase structure, in particular.

-As with most Chomskyan theories: Minimalism has a *strong* focus on natural language facilities. A core thesis is that *"language is an optimal solution to legibility conditions"*. I don't find this interesting, so I won't get into it, and instead will focus on the definitions and usage of the basic operations rather than the motivation for them.
+As with most Chomskyan theories: Minimalism has a *strong* focus on natural language facilities. A core thesis is that *"language is an optimal solution to legibility conditions"*. I don't find this all too interesting, so I won't get much into it, and will instead focus on the definitions and usage of the basic operations rather than the motivation for them.

Modern Minimalism considers three *basic operations*: <span style="font-variant: small-caps;">Merge</span>, <span style="font-variant: small-caps;">Move</span>, and <span style="font-variant: small-caps;">Agree</span>. All that we will discuss falls into one of these basic camps.
@@ -240,9 +264,31 @@ Merge is *the* fundamental underlying aspect of syntax and arguably language as
### projection

+We have talked casually about the idea of heads "projecting" their type, and how this informs the syntactic structure of parsed sentences. We now discuss this formally.
+
+The **projection principle** states that *the properties of lexical items must be satisfied* (chief among lexical properties being selectional properties). This is a simple statement, but it has profound implications: in particular, when we observe that properties of lexical items appear to *not* be satisfied, there is likely something deeper going on.
+
+...
+
### selection

-## Move
+## Move, Part I
+
+<span style="font-variant: small-caps;">Move</span>(α, β)
+
+All movement falls into one of the following categories:
+- Head Movement
+  - T-to-V: affix hopping
+  - V-to-T: verb raising (was / be)
+  - T-to-C: subject-auxiliary inversion
+- Phrasal Movement
+  - A-movement (argument movement)
+    - subject raising
+  - A'-movement (non-argument movement)
+    - topicalization
+    - wh-movement
+
+We shall visit each of these in depth.

### affix hopping
@@ -371,11 +417,14 @@ English's first-person present does not inflect the verb, and so we must introduce
This now makes our top-level phrase type $T$ instead of $V$. It will not remain so for very long, as we shall see in <span style="font-variant: small-caps;">Agree</span>.
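For concreteness, the affix-hopping analysis of a simple past-tense sentence can be sketched in the same forest notation as the earlier trees. (This is an illustrative tree of mine, not from SKS; the intransitive *walk* is assumed, and the verb's own selectional structure is elided.)

```latex
% Affix hopping in *Alice walked*:
% the past affix -ed is generated under T, and hops down
% onto the verb to yield the pronounced form "Alice walk+ed".
\begin{forest}
[T
 [D [Alice, roof]]
 [T_D
  [T_{D,V} [-ed]]
  [V [walk]]]]
\end{forest}
```

The affix *-ed* occupies the same $T_{D,V}$ position as *will* in the tables above; it differs only in being a bound morpheme, which is what forces the hop.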
+### verb raising
+
+### subject-auxiliary inversion

### head movement

### wh-movement

-### vP shells
+### subject raising

Consider the following sentence: *Alice will speak to the assembly*. With our current knowledge of syntax, we would diagram it like so:
@@ -405,7 +454,7 @@ The $D$ *Alice* here is the subject. While replacing it with some $D$s produces
Observe, however, that our tree structure suggests that $T$ - and only $T$ - is involved in the selection of *Alice* as the subject, given locality of selection and the extended projection principle. But this can't be quite right. Plenty of other sentences involving the $T$ *will* are just fine with inanimate subjects: *Time will pass*, *Knowledge will be passed on*, etc. (Notice that *Alice will pass* and *Alice will be passed on* are similarly ungrammatical.) How do we reconcile this?

-We now introduce the idea of **subject raising** / $vP$ shells. Our observations above point towards the $V$ of the sentence rather than the $T$ selecting for the subject $D$ - somehow. This selection would break our guiding principle of locality of selection. But this behavior *does* occur. Can we extend our model to explain this, *without* modifying the locality of selection that has been so useful thus far? We can, indeed, with movement, and illustrate so in the following tree.
+We now introduce the idea of **subject raising** / $vP$ shells. Our observations above point towards the $V$ of the sentence rather than the $T$ selecting for the subject $D$ - somehow. This selection would break our guiding principle of locality of selection. But this behavior *does* occur, and as an empirical science we must adjust our theory accordingly. Can we extend our model to explain this, *without* modifying the locality of selection that has been so useful thus far? We can indeed, with movement, and we illustrate this in the following tree.
![`[T [D Alice] [T_D [T_{D,V} will] [V [D (subj)] [V_D [V_{D,P} speak] [P [P_D to] [D [D_N the] [N assembly]]]]]]]`](subject-movement.png)

<details markdown="block">
@@ -449,6 +498,15 @@ This subject raising is an example of **A-movement** (argument movement). A-move
How do pronouns work?

+First, some definitions. We distinguish several classes of pronouns:
+- **anaphors**: *reflexive* and *reciprocal* pronouns, e.g. *herself*, *each other*, ...
+- **personal pronouns**: *her*, *him*, *they*, *it*, ...
+- **possessive pronouns**: *ours*, *theirs*, *hers*, *his*, ...
+- ...
+
+Every pronoun (pro-form, really) has an **antecedent**: that is, the phrase or concept it is in *reference* to. In contrast to pronouns, we also have **r-expressions**: **independently referential** expressions. These are names, proper nouns, descriptions, epithets, and the like: e.g. *Alice*, *British Columbia*, *the man on the corner*, *the idiot*, etc.; these have no antecedent.
+
+We say that two nodes are **coreferential** (or **co-indexed**) if they refer to the same concept or entity. On tree diagrams, we often mark this with numerical ($_0$, $_1$, ...) or alphabetical ($_i$, $_j$, $_k$) subscripts. (Though we could also indicate this with arrows, we prefer to reserve those for movement, so as to not clutter our diagrams too much.) This is a useful notion when it comes to pronouns.

...

The theory of binding operates under three fundamental principles.
@@ -465,8 +523,26 @@ Our principles imply various things. Principle A implies that:
### raising and control

+Consider the following sentences:
+- *Alice seems to sleep often.*
+- *Alice hopes to sleep often.*
+
+With our current knowledge, we would diagram these sentences near-identically. Yet a closer investigation reveals that they are in fact deeply structurally different.
+- *It seems Alice sleeps a lot.*
+- \* *It hopes Alice sleeps a lot.*

## Advanced Syntax

+### on languages other than english
+
+We have so far approached our discussion of syntax entirely from the point of view of the English language. All of our motivations - our rationales, our counterexamples - have been drawn from and centred around English. This raises the question: just how much of this *holds*, cross-linguistically? What of all the other wide and varied languages of the world - which, clearly, our frameworks must have been built not only to accommodate but to represent *well*, given we are discussing them now, more than fifty years after the Chomskyan revolution and more than a century after the field of linguistics recognized its Indo-European biases?
+
+We have discussed some principles that, clearly, cannot be a feature of all natural languages: like do-support. However, other concepts - like the Subjacency Condition - seem possibly broad enough to be applied across a variety of languages. Is that true? Is *anything* broad enough? (Alternatively: does a [universal grammar](https://en.wikipedia.org/wiki/Universal_grammar) exist?)
+
+This notion of *principles* that hold for some languages and not for others forms the framework of either *Principles and Parameters* or *Government and Binding Theory*. I do not understand the difference between them, and suspect what is above to be a mixture of both, as neither was explicitly mentioned. Nevertheless, everything given here is for English, not some cross-linguistic model of the mind. English remains useful by virtue of being my - and many others' - L1 language, and by being such a *mess* of a language that its structure cannot be explained away trivially.
+
+### negation
+### ellipsis
+
## References

- ✨ [An Introduction to Syntactic Analysis and Theory](https://annas-archive.org/md5/11bbf70ff9259025bc6985ba3fa4083b)