1. Language and Linguistics
Describe distinct features of language as a sign system, and explain each feature with concrete examples.
Human language delivers meaning through phonetic ‘signs’, so we can define it as a sign system. The three features below are unique to language and distinguish it from other sign systems, such as animal signals.
1) Discreteness
Since human brains genetically inherit the ability to ‘recognize’ language, that is, to perceive the discrete sounds that compose a signal, people can combine individually meaningless sounds into meaningful signs. One sound on its own may convey one meaning, while multiple sounds combined in a particular order convey a different one.
For example, (a) the sounds /p/, /t/, /a/ have no significant meaning as separate, discrete units, but combined they build several words with clear meanings: ‘pat’, ‘tap’, ‘apt’. For another example, consider the word pairs below.
(b) Back <--> Pack
Cap <--> Tap
Heat <--> Meat
The words in each pair differ by just one sound (here, one consonant), yet their meanings are entirely distinct. This means the basic units of speech belong to distinct categories, and larger, complex messages can be broken down into smaller, discrete parts.
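For illustration, this recombination of discrete units can be sketched in a short Python snippet; the three-sound inventory and the toy lexicon are my own illustrative assumptions, not real phonological data:

```python
from itertools import permutations

# Three discrete sounds that carry no meaning on their own.
sounds = ["p", "t", "a"]

# A toy lexicon of attested English words (spelled phonemically here).
lexicon = {"pat", "tap", "apt"}

# Every ordering of the same three units is a candidate signal,
# but only some orderings are meaningful words.
candidates = {"".join(p) for p in permutations(sounds)}
words = sorted(candidates & lexicon)
print(words)  # ['apt', 'pat', 'tap']
```

Only three of the six possible orderings happen to be English words, which is exactly the point: the units are meaningless, but particular combinations of them are not.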
2) Arbitrariness
This is the fact that the symbols we use to communicate have no natural form or meaning in and of themselves. The words we read, write, hear, and speak have no natural essence or inherent connection to their meanings; we have assigned them to those meanings arbitrarily.
For example, look at the words below.
(c) French: arbre
Dutch: boom
Macedonian: дрво
Chinese: 树 (shù)
Hindi: पेड़
Korean: 나무
All of these words refer to a ‘tree’, each in a different language. There is no principled reason why they vary so widely, or why we Koreans call a tree ‘나무’; these are simply the words we have agreed will signal the idea of a tree. This phenomenon is therefore called ‘social convention’.
In addition, arbitrariness is found not only at the level of words but also in sentences and the grammatical rules that govern them. For instance,
(d) Korean: 그녀는 그를 사랑한다.
English: She loves him.
Hebrew: היא אוהבת אותו. (Loves she him.)
You can see the differences in word order across languages. Korean tends to order words SOV (Subject + Object + Verb), while other languages order them differently, such as SVO (English) or VSO (Hebrew).
cf) Animal signals are non-arbitrary: there is a very clear connection between the message and the signal used, and the set of signals in animal communication is finite.
3) Creativity
This is another aspect of language, linked to the fact that the potential number of utterances in any human language is infinite: the ability to use language to say new things is limitless. The feature is therefore also known as ‘productivity’ or ‘open-endedness’. Unlike animals, which have a limited number of sounds to express their needs, humans can communicate whatever they want; they can and do manipulate linguistic resources to produce new expressions and new sentences. Examples below.
(e) desire + -able → desirable
un- + able → unable
un- + desire + -able → undesirable
As you can see, attaching the suffix ‘-able’ to the word ‘desire’ generates ‘desirable’; attaching the prefix ‘un-’ to ‘able’ yields the different word ‘unable’; and combining all three morphemes produces yet another word, ‘undesirable’. New words can always be created by various means, whether syntagmatic or paradigmatic.
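The affixation above can be mimicked in a small Python sketch; the crude final-‘e’ deletion is an assumption made for these particular examples, not a general spelling rule:

```python
def derive(base, prefix="", suffix=""):
    """Concatenate morphemes. Real morphology also adjusts spelling;
    here only final-'e' deletion before a suffix is handled."""
    stem = base
    if suffix and stem.endswith("e"):
        stem = stem[:-1]  # desire + -able -> desirable
    return prefix + stem + suffix

print(derive("desire", suffix="able"))               # desirable
print(derive("able", prefix="un"))                   # unable
print(derive("desire", prefix="un", suffix="able"))  # undesirable
```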
In addition, this property can also be found in the structuring of sentences. Look at the sentences below.
(f) Danced.
MJ danced.
MJ danced the salsa.
This is the process of making sentences with syntagmatic relations: adding phrases in a linear direction.
(g) The man danced the salsa.
The man watched the salsa.
The man watched the tango.
Examples above are the process of making sentences with paradigmatic replacements: replacing the sentence’s object, verb, etc.
2. Phonetics and Phonology
Explain, with concrete examples, three tools with which you can distinguish phonemes from allophones.
This is a matter of ‘phonemic analysis’. When two or more sounds are phonetically similar, we can identify whether they function as separate phonemes or as allophones within a language, and to do this we carry out a phonemic analysis. The sounds will turn out to be either separate phonemes, used contrastively, or allophones, which appear in complementary distribution. The process can be laid out as three tools, or steps.
1) Check for a minimal pair / near-minimal pairs (contrastive distribution)
Before looking at the tool itself, note that when two phonetically similar sounds, X and Y, occur in identical environments and substituting one for the other brings a change in meaning, the sounds in question are in contrastive distribution. Phonetically similar sounds in this kind of distribution are separate phonemes, and the best way to identify them is to find minimal or near-minimal pairs.
A minimal pair is a pair of words that differ in one sound only and are identical in every other respect. For example,
(a) sink / zinc in English are examples of a minimal pair.
Although word-initial position is the most common place to find contrastive distribution, other positions in the word matter just as much, e.g.
(b) [bɐsɪz] ‘busses’ / [bɐzɪz] ‘buzzes’ (c) [pi:s] ‘peace’ / [pi:z] ‘peas’
The data in (d), (e), and (f) give us minimal pairs with three environments for the contrastive distribution of the consonants [m] and [n]:
(d) [mɔk] ‘mock’ / [nɔk] ‘knock’ (e) [sɪmə] ‘simmer’ / [sɪnə] ‘sinner’ (f) [si:m] ‘seem’ / [si:n] ‘seen’
Finding minimal pairs for [s] / [z] and [m] / [n] shows that these pairs of sounds are separate phonemes: /s/, /z/, /m/ and /n/. Moreover, once you find a minimal pair, the two sounds obviously cannot be in complementary distribution, because they occur in exactly the same context.
Next, near-minimal pairs have more than one point of difference between the two words, but the additional differences are irrelevant to the contrast between the two sounds in question. For instance:
(g) [ˈpleʒər] ‘pleasure’ and [leðər] ‘leather’
The words above qualify as a near-minimal pair, since the sounds immediately adjacent to the target sounds [ʒ] and [ð] are the same in both words: [e] before the target sound and [ə] after it. The additional differences cannot be attributed to the context. Therefore, like minimal pairs, near-minimal pairs are usually sufficient to demonstrate that two sounds are separate phonemes in a language.
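As a rough sketch, the minimal-pair test can be automated by comparing transcriptions segment by segment; the segmentations below are my own simplified assumptions:

```python
def is_minimal_pair(word_a, word_b):
    """True if the two transcriptions (lists of segments) have the
    same length and differ in exactly one segment."""
    if len(word_a) != len(word_b):
        return False
    differences = sum(1 for a, b in zip(word_a, word_b) if a != b)
    return differences == 1

# 'sink' vs 'zinc': differ only in the initial [s]/[z]
print(is_minimal_pair(list("sɪŋk"), list("zɪŋk")))          # True
# 'seem' vs 'seen': differ only in the final [m]/[n]
print(is_minimal_pair(["s", "i:", "m"], ["s", "i:", "n"]))  # True
# 'peace' vs 'busses': not comparable at all
print(is_minimal_pair(["p", "i:", "s"], list("bɐsɪz")))     # False
```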
2) Check for complementary distribution
When two phonetically similar sounds, X and Y, occur in mutually exclusive environments (where one occurs, the other does not, and vice versa), and substituting one for the other does not change meaning, the sounds are in complementary distribution. Phonetically similar sounds in complementary distribution are allophones of a single phoneme.
Here we need rather more examples than in tool 1). The examples below compare the plain alveolar lateral approximant [l] (the ‘clear l’), the devoiced alveolar lateral approximant [l̥], and the velarized alveolar lateral approximant [ɫ] (the ‘dark l’).
(f) [l]                (g) [l]
[lɪp] lip              [Ꞌbeli] belly
[li:p] leap            [bɪꞋli:v] believe
[let] let              [sɪˈlekt] select
[læp] lap              [Ꞌsæli] Sally
[lɐk] lock             [Ꞌkʰɐlə] colour
[blɒk] block           [Ꞌkʰɔləm] column
[laɪk] like            [Ꞌwaɪli] wily
[laʊd] loud            [əꞋləʊn] alone
[ləʊ] low              [Ꞌbʊli] bully
[lɪə] Lear             [əꞋlu:f] aloof
(h) [l̥]                (i) [ɫ]
[pʰl̥eɪ] play           [pʰɪɫ] pill
[pʰl̥aɪ] ply            [fi:ɫ] feel
[pʰl̥əʊsɪv] plosive     [tʰeɫ] tell
[pʰl̥æsɪd] placid       [seɪɫ] sail
[pʰl̥ʊɹl̩] plural        [kɐɫt] cult
[kʰl̥eɪ] clay           [bɪɫdɪŋ] building
[kʰl̥aʊd] cloud         [bɐɫk] bulk
[kʰl̥əʊs] close (adj.)  [fɪɫm] film
[kʰl̥ɔ:] claw           [bʊɫ] bull
[kʰl̥ɪə] clear          [fu:ɫ] fool
The environments for the sounds [l], [l̥], and [ɫ] are listed below.
(j) [l]      (k) [l]      (l) [l̥]      (m) [ɫ]
# __ ɪ       e __ i       pʰ __ eɪ     ɪ __ #
# __ i:      ɪ __ i:      pʰ __ aɪ     i: __ #
# __ e       ɪ __ e       pʰ __ əʊ     e __ #
# __ æ       æ __ i       pʰ __ æ      eɪ __ #
# __ ɐ       ɐ __ ə       pʰ __ ʊ      ɐ __ t
b __ ɒ       ɔ __ ə       kʰ __ eɪ     ɪ __ d
# __ aɪ      aɪ __ i      kʰ __ aʊ     ɐ __ k
# __ aʊ      ə __ əʊ      kʰ __ əʊ     ɪ __ m
# __ əʊ      ʊ __ i       kʰ __ ɔ:     ʊ __ #
# __ ɪə      ə __ u:      kʰ __ ɪə     u: __ #
A generalization emerges from these lists. Columns (j) and (k) both concern the clear [l]; column (l) is for the voiceless [l̥], and column (m) for the dark [ɫ]. In list (j) we find [l] word-initially (or, in block, after [b]) before what appears to be any vowel. In list (k), [l] occurs between two vowels. The environments in list (l) are a preceding aspirated voiceless bilabial or velar plosive and a following vowel. The generalization for list (m) is a preceding vowel and a following word boundary or consonant. The three sounds are thus in complementary distribution: [l] occurs word-initially, between two vowels, or between a consonant (other than [pʰ] or [kʰ]) and a vowel; [l̥] occurs after an aspirated voiceless bilabial or velar plosive; and [ɫ] occurs word-finally or before another consonant. The conclusion we draw is that the three sounds are allophones of the same phoneme.
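The environment-collection procedure above can be sketched in Python. The data here is only a small, hand-simplified slice of the word lists, so the code illustrates the method rather than proving the full analysis:

```python
def environments(occurrences):
    """Map each variant to its set of (preceding, following) environments.
    occurrences: {variant: [(segment_list, index_of_target), ...]}"""
    return {variant: {(segs[i - 1], segs[i + 1]) for segs, i in items}
            for variant, items in occurrences.items()}

# A hand-simplified slice of the data; '#' marks a word boundary.
data = {
    "l":  [(["#", "l", "ɪ", "p", "#"], 1),        # lip
           (["#", "b", "e", "l", "i", "#"], 3)],  # belly
    "l̥":  [(["#", "pʰ", "l̥", "eɪ", "#"], 2)],     # play
    "ɫ":  [(["#", "pʰ", "ɪ", "ɫ", "#"], 3)],      # pill
}

envs = environments(data)

# Complementary distribution: no two variants share an environment.
variants = list(envs)
complementary = all(envs[a].isdisjoint(envs[b])
                    for a in variants for b in variants if a < b)
print(envs["ɫ"])      # {('ɪ', '#')}
print(complementary)  # True
```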
Beyond distribution, one more requirement for confirming an allophonic relationship is phonetic similarity: the sounds in question must be phonetically similar. If this requirement is not met, you cannot call the sounds allophones of the same phoneme even when they show complementary distribution.
3) Check the broadest distribution
In cases like the example in the second step/tool, we conclude that the sounds in question are allophones of the same phoneme. The question that arises at this point is: which phoneme are they allophones of? The answer is that the phoneme is identified with the allophone that has the least restricted (broadest) distribution. In the case above, [l] is the least restricted allophone. If we look for more data on the distribution of [l], we find it occurring in further environments not shared by the other two allophones, as in McLeod, block, tablet, slide, flow, etc. The environments for [l] in these words are as follows:
(n)
k __ aʊ
b __ ə
# b __ ɒ
# s __ aɪ
# f __ əʊ
Thus [l] also occurs after consonants other than [pʰ] and [kʰ]. Therefore, the phoneme to which [l], [l̥], and [ɫ] belong is represented by the allophone distributed across the broadest range of contexts: /l/.
*The concrete examples in this task are largely drawn from ‘Introduction to Phonetics and Phonology’ by Prof. Pramod Pandey of Jawaharlal Nehru University, New Delhi.
3. Morphology
What constitutes the linguistic information each word carries? Show it with concrete examples.
Although dictionary definitions are never fully satisfactory, it is clear that a word of a spoken language can be defined as the smallest sequence of phonemes that can be uttered in isolation, and that it is the fundamental unit of a sentence.
The question is: what exactly do we know when we say we know a word? The answer varies with perspective, but the main kinds of linguistic information each word carries are as follows.
1) Sounds
Knowing a word’s phonetic value is the most basic knowledge of it. For instance,
(a) the living room (b) the living creature
In (a), ‘living’ is a noun forming a compound with ‘room’; in (b), ‘living’ is an adjective modifying ‘creature’. A compound like (a) generally carries stress only on its first element, whereas in (b) the noun ‘creature’ can also be stressed, since the phrase is a noun phrase rather than a compound.
2) Meanings
The next thing important is ‘meanings’. For example,
(c) Frodo put a ring around his neck just now. (d) Sam wanted a ring to make a proposal.
In (c), ‘a ring’ has a specific referent; in (d), the referent of ‘a ring’ is unclear. This difference in the meaning of the indefinite article ‘a’ is intimately related to the meanings of the verbs ‘put’ and ‘wanted’.
3) Parts of speech (lexical categories)
Precise classification of lexical categories is also essential to using a word. Consider one of the common sources of confusion.
(e) *The group were able to identity the most serious academic problem.
(f) *At present, there is a lot of compete for good jobs.
This is a noun/verb confusion. (e) misuses a noun: ‘identity’ should be the verb form, identify. Likewise, ‘compete’ in (f) is misused: it should be the noun form, competition.
Moreover, there is a slightly trickier example: ‘there’.
(g) There is a man in the room. (h) ? A man is in the room.
(i) The man is in the room. (j) ? There is the man in the room.
(g) is grammatical, but (h) is somewhat awkward because ‘a man’, which introduces new information, sits in subject position without the support of ‘there’. Meanwhile, with ‘the man’, (i) without ‘there’ is more natural than (j), because ‘the man’ is someone both speaker and listener already know about. ‘There’ is an expletive pronoun that acts as a subject in place of the phrase carrying new information, such as ‘a man’ in (g). As these examples show, we need to understand a word’s exact part of speech and its properties in order to use it.
4) Subcategories
Next come subcategories, which play an important role when we comprehend or form a sentence. For instance, English verbs can be subcategorized along various dimensions; the most typical is the division between action (dynamic) verbs and stative verbs.
(k) Sheldon is knocking on the door. (l) *Sheldon is adoring the door.
(k) is correct because ‘knock’, an action verb, can be used in the progressive. (l) is ungrammatical because ‘adore’ is a stative verb that resists the progressive. We need to be fully aware of subcategories even when words share the same part of speech.
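This subcategorization can be modeled as a lexical feature; a minimal sketch, in which the verb list is an illustrative assumption:

```python
# Verbs marked [+stative] in a toy lexicon (illustrative, not exhaustive).
stative_verbs = {"adore", "know", "believe", "own"}

def progressive_ok(verb):
    """Action verbs allow the progressive; stative verbs resist it."""
    return verb not in stative_verbs

print(progressive_ok("knock"))  # True:  'Sheldon is knocking...'
print(progressive_ok("adore"))  # False: *'Sheldon is adoring...'
```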
5) Grammatical use
The last kind of information a word carries is how it is used in a sentence.
(m) *She shouted him who was disrespectful.
(n) She shouted him, who was disrespectful.
As the sentences above show, there is information to consider in the use of ‘who’. If ‘him’ is the antecedent, as in (m), the sentence is ungrammatical when the clause is a defining (restrictive) relative clause; it must instead be a continuative (non-restrictive) relative clause, as in (n). Since a pronoun like ‘him’ defines its referent by itself, using ‘who’ to introduce a defining relative clause creates a double-defining problem.
The five aspects above are the most essential kinds of information a word contains, though on occasion other elements come into play as well. This information is stored in the lexicon, the dictionary in our brain, which is why words are commonly called lexical items.
4. Syntax
What are phrase structure rules and transformational rules? Discuss the properties of each rule type and their roles in grammar.
Phrase structure rules and transformational rules are two major types of syntactic rule proposed by Noam Chomsky in his early theory of transformational generative grammar. Transformational generative grammar generates an infinite number of sentences from a few simple syntactic rules and systems; it pursues universal, powerful rules for generating sentences.
1) Phrase structure rules
In generative grammars, sentences are ‘generated’ by the rules of the grammar. Phrase structure rules generate the legitimate structures of sentences: they break a natural-language sentence down into its constituent parts, known as syntactic categories, including both lexical and phrasal categories. The basic form is
(a) XP → X Y, meaning that the constituent XP is rewritten as the two subconstituents X and Y.
A main example is the noun phrase rule, NP → (D) (AdjP+) N (PP+), where D, AdjP, and PP refer to determiners, adjective phrases, and prepositional phrases respectively. Parenthesized constituents are optional; the rest are obligatory. The superscript + means two or more such phrases can be generated consecutively. The word playing the central role in a phrase gives the phrase its name: ‘noun phrase’, ‘verb phrase’, and so on. Let’s check a few more examples.
(b) Adjective phrase: AdjP → (I) A, where (I) is an intensifier. Prepositional phrase: PP → P NP.
One thing to be careful about is that a constituent is called a phrase even when it consists of a single word. The rule for the verb phrase, arguably the most essential phrase, is VP → V (NP/AdjP/PP), and we can even state a sentence rule:
(c) S → NP VP.
Importantly, the five sentence forms of traditional grammar can all be expressed within the verb phrase rule:
(d) Form 1: V. ex) run.
Form 2: V-AdjP. ex) kept silent.
Form 3: V-NP. ex) enter the café.
Form 4: V-NP-NP. ex) owed him an apology.
Form 5: V-NP-AdjP. ex) made him angry.
Numerous further sentence forms, beyond these five traditional patterns, can be generated within the phrase structure rules.
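To see how such rules ‘generate’ sentences, here is a toy Python generator over a simplified version of the rules above; the miniature lexicon is my own assumption, and optional constituents are omitted for brevity:

```python
import random

# Simplified phrase structure rules (optional constituents omitted);
# the miniature lexicon is an illustrative assumption.
rules = {
    "S":  [["NP", "VP"]],
    "NP": [["D", "N"]],
    "VP": [["V"], ["V", "NP"]],   # covers forms 1 and 3 only
    "D":  [["the"], ["a"]],
    "N":  [["girl"], ["dog"], ["café"]],
    "V":  [["ran"], ["saw"], ["entered"]],
}

def generate(symbol="S"):
    """Rewrite a symbol using a randomly chosen expansion;
    symbols without a rule are terminal words."""
    if symbol not in rules:
        return [symbol]
    expansion = random.choice(rules[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))  # e.g. 'the girl saw a dog'
```

Every sentence the generator emits is licensed by the rules, and a handful of rules already yields many distinct sentences, which is the ‘generative’ point.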
Furthermore, the analysis given by phrase structure rules can be displayed as a tree diagram or as labelled bracketing. In labelled bracketing, the sentence ‘The girl saw a dog’ would be
‘[S [NP1 [D1 The] [N1 girl]] [VP [V saw] [NP2 [D2 a] [N2 dog]]]]’. This shows the relations of words very simply with square brackets, but it is rarely seen nowadays.
In addition, phrase structure rules can account for recursiveness, the repeated embedding of phrases within phrases, and for the problem of ambiguity, which arises when a sentence’s meaning cannot be uniquely determined.
However, phrase structure rules may fail to match sentences that require more finely grained subcategories, because they build constituent structures from the most general categories. Thus we need certain restrictions on subcategories (subcategorization), and selectional restrictions in terms of meaning can likewise be imposed. Examples below.
(f) A: ‘He devoured an apple.’ is correct, but B: *‘He devoured.’ is ungrammatical (devour requires an object). Likewise, C: ‘She scolded her child.’ is correct, but D: *‘She scolded her desk.’ is not (scold selects a human object).
2) Transformational rules
Noam Chomsky suggests that a single sentence has both a visible surface structure and a deep structure that is not directly observable. Every sentence starts from the latter and arrives at the former. Transformational rules act upon the deep structure (DS, hereafter) and thereby generate a surface structure (SS, hereafter) different from the DS. E.g.
(g) DS: Amy loves Lorry. → SS: Lorry is loved by Amy.
DS: The policeman arrested the boy. → SS: The boy was arrested by the policeman.
The examples above show the transformation from active to passive voice. As another example, you can use the transformational rules that shift interrogatives in order to generate interrogative sentences.
(h) DS: He was writing a letter. → SS: Was he writing a letter? (shift of an auxiliary verb)
DS: Walter loves whom. → SS: Whom does Walter love?
Aside from the examples above, as pp. 242–244 of 『현대 영어학의 이해』 show, there are also transformational rules for changing sentence form, verb phrases, adjective phrases, prepositional phrases, and so on.
However, like phrase structure rules, transformational rules have restrictions. First, there is the coordinate structure constraint: we cannot move part of a structure conjoined by a coordinating conjunction, though we can move the whole. Here is an example.
(i) Did you meet Skyler and Marie yesterday? → *Who did you meet Skyler and yesterday?
There is also the subjacency constraint, meaning that a constituent of a subordinate clause cannot move across two or more bounding nodes (NP or S) at a time, and the tensed-S constraint, meaning that no constituent can move out of a tensed sentence or phrase. Instances below.
(j) Did he claim (that) he had lost? → *What did he make the claim that he had lost?
It seems the student likes boxing. → *The student seems likes boxing.
5. Semantics
What are ‘semantic roles’? Why do they play an important role in deciding grammaticality in sentences?
A semantic role is a term from sentential semantics. A sentence is composed of various components, the most representative being words. However, merely stringing words together does not produce a meaningful sentence: we must not only arrange the words in the right syntactic structure but also take semantic factors into account.
Semantic roles, also known as thematic relations, are the various roles a noun phrase may play with respect to the action or state described by the governing verb, commonly the sentence’s main verb. In other words, we look at words according to the meaningful role they play in language, rather than treating them as mere containers of meaning. For example, in the sentence
(a) Jamey broke the window. Jamey is the doer of the breaking, so he is an agent; the window is the thing that is broken, so it is a patient.
There are many kinds of semantic roles with diverse definitions and uses, but I will address a few more of the major terms needed to understand semantic roles in general.
(b) I put the cards on the table.
In this sentence, ‘the cards’ is the theme. A ‘theme’ is something that undergoes the action but does not change its state: the cards are moved but unaltered, while ‘the table’ is rather the location of the action. The term is sometimes used interchangeably with the ‘patient’ we saw in (a).
(c) The cook cut the cheese with a knife. (d) She used a paintbrush to make a graffiti.
These examples show the concept of instrument: something used to carry out or perform the action, here ‘a knife’ and ‘a paintbrush’ respectively.
(e) She went to Venice. (f) I slept at the park.
These examples illustrate goal and location, which are similar but distinct. A goal is what the action is directed towards, ‘Venice’ in (e); a location is where the event or action takes place, ‘the park’ in (f).
(g) Edward saw the star. (h) Jenna feared the monster.
The next pair to explain is percept/stimulus and experiencer. A percept (or stimulus) is the entity perceived or experienced; an experiencer is the entity aware of the action or state described by the predicate, but not in control of it. Here, ‘the star’ and ‘the monster’ are the percepts/stimuli; ‘Edward’ and ‘Jenna’ are the experiencers.
Last but not least, let’s take some examples that demonstrate why we need to consider semantic roles:
(i) The girl opened the door. (j) The key opened the door.
Syntactically, ‘the key’ and ‘the girl’ are equivalent: both are subjects, and both open the door. But they obviously do different things: ‘the girl’ instigates the action (agent), while ‘the key’ is used to perform it (instrument). For another example,
(k) *Edward saw the telescope and the star.
This sentence is wrong because the coordinating conjunction ‘and’ is supposed to link two objects that have the same semantic role, but the ‘and’ in (k) does not: ‘the telescope’ is an instrument while ‘the star’ is a stimulus.
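This role-matching condition on coordination can be sketched as a simple lookup; the role assignments, and the extra noun phrase ‘the binoculars’, are illustrative assumptions:

```python
# Hypothetical semantic-role assignments for the noun phrases discussed.
roles = {
    "the girl": "agent",
    "the key": "instrument",
    "the telescope": "instrument",
    "the binoculars": "instrument",
    "the star": "stimulus",
}

def can_coordinate(np1, np2):
    """'and' is only felicitous when both conjuncts share a semantic role."""
    return roles[np1] == roles[np2]

print(can_coordinate("the telescope", "the star"))        # False, as in (k)
print(can_coordinate("the telescope", "the binoculars"))  # True
```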
In short, since semantic roles can have a significant impact on the grammaticality of a sentence, as these examples show, you need to know them well and apply them appropriately, alongside linguistic meaning and grammatical relations, to form a meaningful sentence.
6. Pragmatics
What are implicature and entailment? Explain them with concrete examples, and discuss the types of implicature with concrete examples.
First, the field of pragmatics deals with the principles of language use that explain how extra meaning is conveyed without being encoded in language. We therefore need to investigate speaker meaning; that is, pragmatics concentrates more on analyzing what people mean by their utterances than on what the words or phrases in those utterances might mean by themselves. As part of this, comprehending contextual meaning requires interpreting the implicature and the entailment of a given utterance. Both deal with assumptions the listener or reader makes about a situation.
1) Implicature
An implicature is a cancellable implication: an inference which we take an utterance to imply on its face, by default, but which may nonetheless be false in the context of other information even if the utterance is true. For example,
(a) Mary had a baby and got married.
This strongly suggests that Mary had the baby before the wedding, but the sentence would still be strictly true if Mary had her baby after she got married. Furthermore, if we add the qualification ‘not necessarily in that order’ to the original sentence, the implicature is cancelled even though the meaning of the original sentence is unaltered.
The philosopher H. P. Grice, who coined the term, distinguished conversational implicatures, which arise because speakers are expected to respect general rules of conversation, from conventional ones, which are tied to particular words such as ‘but’ or ‘therefore’. Take the following example:
(b) A: Do you like Algebra? B: Well, let’s just say I don’t jump for joy before class.
Here, B does not say, but conversationally implicates, that he doesn’t like Algebra; otherwise his utterance would not be relevant in the context. Conversational implicatures are traditionally seen as contrasting with entailments: they are not necessary consequences of what is said, and they are cancellable.
(c) A: Will Sara be at the meeting this evening? B: Her car broke down.
Here is another example of conversational implicature. B says only that her car broke down, but in this conversation it implicates that Sara is unable to attend the meeting.
(d) John is poor but happy.
This is a typical example of a conventional implicature: the word ‘but’ implicates a contrast between being poor and being happy. Sentence (d) will always implicate something like ‘John is surprisingly happy despite his poverty’, with or without any particular context.
(e) Even Ken knows that’s stupid.
The sentence above also contains a conventional implicature. The word ‘even’ implicates that Ken is the person least likely to know that ‘that’ is stupid.
2) Entailment
An entailment is a relation between sentence meanings, or propositions, and it is a necessary implication: an inference which must be true if the utterance is true (if A, then B). It therefore holds no matter what the facts of the world happen to be. As mentioned above, entailment differs from implicature in that with implicature the truth of A merely suggests the truth of B but does not require it. For instance,
(f) John’s been in Paris for three years.
The utterance implicates that John is still in Paris at the time of speaking, i.e. it is understood as a continuative perfect; but that inference is not necessarily true. Meanwhile, the utterance entails that John was in Paris at some point in the past: if John has never been in Paris, the utterance itself is false. Take the other examples below.
(g) Scott broke the window. (h) Brian and Jane went to the party.
(g) entails that the window broke, and (h) entails that Brian went to the party. Again, both inferences are true unless the utterances themselves are false.
(i) A: Everyone passed the exam. B: No one failed the exam.
Here is an interesting case. A entails B: whenever A is true, B is true; the information B contains is contained in the information A conveys; any situation describable by A must also be describable by B; and A together with ‘not B’ is contradictory. As these formulations show, entailment is not only a pragmatic concept but can also be treated as a purely logical one.
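The logical character of entailment can be checked model-theoretically in a few lines of Python: A entails B iff B is true in every situation in which A is true. The three-student universe below is an illustrative assumption:

```python
from itertools import product

students = ["ann", "ben", "cho"]  # a toy three-student universe

def everyone_passed(situation):
    return all(situation[s] for s in students)

def no_one_failed(situation):
    return not any(not situation[s] for s in students)

# Enumerate every possible situation: each student passes (True) or fails.
situations = [dict(zip(students, outcome))
              for outcome in product([True, False], repeat=len(students))]

# A entails B iff B holds in every situation in which A holds.
entails = all(no_one_failed(s) for s in situations if everyone_passed(s))
print(entails)  # True
```

An implicature would fail this test: there are situations where the utterance is true but the implicated inference is not, which is exactly why implicatures are cancellable and entailments are not.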