Only one problem will be tackled, a central structural problem that can be discussed without raising the others.

If a word has fifty different meanings in a dictionary, how is an occurrence of the word related to just one of those meanings? It is contended here that the theoretical frameworks that linguists use are inadequate to solve this problem - or even to state the problem in such a way that a solution could be attempted. So there must be adjustments made to our theoretical perspectives, after which the problem can be restated. The traditional view, in which meaning is indexed through the lemma, is now regarded as rather suspect (Tognini-Bonelli), and it is to be expected that a new generation of dictionaries will arise in which the indexing is through the form and not the lemma.

Only when the meaningful units of a language have been reliably identified will it be useful to examine this matter thoroughly, and then, since the links between form and meaning will not have been broken, the task of description will be different from current work in semantics.

The problem in both cases is how to relate a finite resource to an unlimited set of applications; in the case of syntax the set of rules is finite and the set of sentences is not; in the case of lexis the set of meaningful items is finite and the set of meanings in use does not appear to be limited. Recursion is one of the simpler types of combination; a single rule-form in the grammar provides the link between finite and unlimited sets.

For the combinatorial relations of the lexicon, a more complex relationship needs to be defined. That is the purpose of this paper. A lexicon, or a dictionary, consists of a list of words, to each of which is attached a number of statements, features etc. The words are arranged according to the rules of grammar to make text.

This is because some aspects of textual meaning arise from the particular combinations of choices at one place in the text, and there is no place in the lexicon-grammar model where such meaning can be assigned. Since there is no limit to the possible combinations of words in texts, no amount of documentation or ingenuity will enable our present lexicons to rise to the job. However, the meaningful combinations can now be described in new ways which make them much more tractable. In a recursive rule, an element X introduces another X; obviously the second X can also be expanded, introducing another X, and so on indefinitely.
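To make the link concrete, here is a minimal Python sketch of a single recursive rule generating an unlimited set of strings; the rule X -> "again" X | "stop" and its vocabulary are invented for illustration, not drawn from any grammar discussed here.

```python
# Minimal sketch: one recursive rule-form links a finite grammar to an
# unlimited set of strings. The rule X -> "again" X | "stop" is invented
# purely for illustration.

def expand_x(depth: int) -> str:
    """Expand X recursively `depth` times before terminating."""
    if depth == 0:
        return "stop"
    return "again " + expand_x(depth - 1)

if __name__ == "__main__":
    # Every depth yields a different string, so the single rule covers
    # an unbounded set of sentences.
    for d in range(4):
        print(expand_x(d))
```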

See Bach's treatment of recursion. However, this raises a further problem: when two words appear together in a text, how do we know whether they realise one meaning or two? The process known as tokenisation in computational linguistics is a relatively straightforward matter when the orthographic word can be trusted, but expands infinitely as soon as multi-word units are recognised. The application of powerful computers to large text corpora has begun to improve our methods of observation.
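As an illustration of why tokenisation stops being straightforward once multi-word units are admitted, the following Python sketch contrasts plain whitespace splitting with a greedy longest-match against a tiny multi-word lexicon; the lexicon and the matching strategy are assumptions made for this example, not the procedure referred to in the text.

```python
# Sketch: whitespace tokenisation versus a greedy longest-match against a tiny
# multi-word lexicon. The lexicon and the matching strategy are assumptions
# made for this example only.
MWU_LEXICON = {("borders", "on"), ("white", "wine"), ("of", "course")}
MAX_MWU_LEN = max(len(m) for m in MWU_LEXICON)

def tokenise(text: str) -> list[tuple[str, ...]]:
    words = text.lower().split()
    tokens, i = [], 0
    while i < len(words):
        # Try the longest candidate multi-word unit first.
        for span in range(min(MAX_MWU_LEN, len(words) - i), 1, -1):
            candidate = tuple(words[i:i + span])
            if candidate in MWU_LEXICON:
                tokens.append(candidate)
                i += span
                break
        else:
            tokens.append((words[i],))
            i += 1
    return tokens

print(tokenise("The ambience borders on the holy"))
# [('the',), ('ambience',), ('borders', 'on'), ('the',), ('holy',)]
```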

Reversal

To give just one example of the inadequacies of a lexicon built by established methods, such lexicons do not take into account the common phenomenon of semantic reversal. Instead of expecting to understand a segment of text by accumulating the meanings of each successive meaningful unit, here the process is reversed: a number of units taken together create a meaning, and this meaning takes precedence over the 'dictionary meanings' of whatever words are chosen.

If the two meanings - the one created by the environment of the item, and the one created by each item individually - are close and connected, the text is felt to be coherent; if they are not, some interpretation has to be made - perhaps the meanings of the items are neutral with respect to the semantic demands of the environment, perhaps there is a relevant metaphorical interpretation, or an irony, or a very rare meaning of an item, or a special interpretation because of what the text is about at this point.

The flow of meaning is not from the item to the text but from the text to the item. In practice, the flow is rarely in one direction only. The textual environment will nearly always have some effect on the meaning of a unit, and the accepted features of the meaning will not often be totally ignored. Not only do they (a) supply only some of the meaning, but also (b) they often supply meaning components that do not fit the particular environment; in addition (c) there is no mechanism that I am aware of for adapting a lexical meaning to its environment. (Reversal at a propositional level is introduced in Sinclair (b) but not given a name.)

The effects of reversals can be seen in dictionaries and lexicons when a word is frequently found in collocation with another, and this has an effect on the meaning. For example, white wine is not white, but ranges from almost colourless to yellow, light orange or light green in colour.

That is to say, the meaning of white when followed by wine is a different colour range than when it is not. This assumes that the user already knows roughly what colour white is when collocated with wine. Such examples are familiar enough. But when the word holy is interpreted as an abnormal mental state, as in the example 'The ambience borders on the holy', there is no lexicon that sanctions such a meaning, and indeed, if there were, it would be a distraction with reference to most of the occurrences of the word holy.

The meaning is created in the collocation with borders on. Whatever follows this phrase indicates the limits of normality by specifying a mental state that lies just outside normality. When the adjective is obsessional, the feature of abnormality is already present in the meaning of the word, and the co-ordinated choice will be felt to be coherent; in the case of holy, the required feature has to be added by reversal. And if there were an instance of borders on the normal, this would be interpreted as fully ironical, suggesting that the normal is unexpected.

The way in which such a semantic problem is tackled is parallel to, or even identical with, a wide range of potential difficulties of interpretation; for example, if in a conversation someone says 'Wasn't that awful what happened to Harry?' Or if you are reading a book that is outside your normal area of expertise, and come across items that the author assumes you know the meanings of - like abbreviations - you can either break off in your reading and consult a specialised glossary, or plough on with whatever understanding you can glean from reversal techniques.

After a while either your interpretation will break down altogether or you will survive the particular passage because none of the unresolved meanings are critical to your overall understanding.

Theory Adjustment

'Reversal' is one of the new descriptive categories that will be required to account adequately for the data. The problem goes deeper, however, and requires some reorientation of theory. As a start, here are three hypotheses that, in various ways, cut across established theoretical norms and assumptions.

Without the acceptance of these hypotheses, or their refutation, progress towards a better account of meaning will be slow.

I. Language text is not adequately modelled as a sequence of items, each in an environment of other items.

We normally accept an underlying model of language as 'item-environment'. At any point in a text we can interpret the occurrence of an item in terms of what other choices are possible, given the environment. Hence each item is both an item in its own right and a component of the environment of other items. This model must be examined carefully, because it seems inherently implausible.

Each item would have to be interpretable, simultaneously, as having many different meaning-relations with other items, equally multifaceted. As each item came to be processed for its contribution to the meaning of the text, every other item nearby would change not only its meaning, but the basis of its meaning - whether central or peripheral to the node in focus. This would lead to a huge multiplicity of meanings, and the need for a complex processor to relate them to each other, discard irrelevant ones, etc.

Further, since we have no reason to believe that the interpretation will proceed on a strict linear basis, the model would become extremely complex. Such complexity reflects the interdependence between words and their environments, and makes it clear that all the patterning cannot be described at once; some elaboration of the model is needed in order to disperse the complexity.

It could also take the form of erecting a hierarchy of units, a rank scale in Halliday's terms, where structural patterns of different types and dimensions are arranged in a taxonomy. Exactly what model will emerge is not possible to predict at present; my aim in this paper is to establish the need for it, and to put aside some well-respected assumptions about language that may be hampering our thinking - such as the present imbalance in favour of the independence of the word, rather than its interdependence with its 'cotext', or verbal environment.

To put these notions aside is not to discard them for ever, or to attack their intellectual integrity, or the competence of those who uphold them; it is merely to suspend their operation temporarily to see if the picture that emerges from a rearrangement is more satisfying than the present one. This point is particularly important with respect to the second hypothesis that I would like to put forward:

II. Ambiguity in a text is created by the method of observation, and not by the structure of the text.



If a word is likely to be intricately associated with the words that occur round about it, then the consequences of studying its meaning in isolation are unpredictable. Dictionaries, which have little choice but to organise their statements of meaning around the word, present a picture of chaotic ambiguity. Words have many meanings, and there is no way of working out in advance which one is appropriate in a text.

However, if we extend the viewpoint to two or three words - which is normal when lexicographers recognise a relatively fixed phrase - much of the ambiguity drops away. Despite almost universal accord with the position that the environment of occurrence is important in text structure, every machine lexicon I know persists in starting with the inappropriate unit, the word. (This can be made clear in a bilingual context; see Sinclair et al.) Having created quite fictitious ambiguities, the researcher then multiplies them with similarly complex possibilities for the next word, and the next. Here is a superficial example.
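As a rough stand-in for such an example, the following Python sketch (with invented sense counts) shows the kind of multiplication that results when each word is disambiguated in isolation.

```python
# Sketch: the "fictitious ambiguity" that results when every word is looked up
# in isolation. The sense counts per word are invented numbers for illustration.
from math import prod

SENSE_COUNTS = {"white": 12, "wine": 4, "borders": 7, "on": 20, "the": 3, "holy": 5}

def reading_count(tokens: list[str]) -> int:
    """Number of sense combinations if each word is disambiguated independently."""
    return prod(SENSE_COUNTS.get(t, 1) for t in tokens)

print(reading_count(["white", "wine"]))                 # 48 combinations for two words
print(reading_count(["borders", "on", "the", "holy"]))  # 2100 combinations for four words
```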

In this desperate situation, there arises a need for sensitive algorithms to filter out all the meanings that do not apply in the particular instance, all except one. The mess is so serious by now that this cannot be achieved by automatic process alone; humans must be trained and employed to clear it up. This process must be compared with the normal linguistic activity of an ordinary person.

All day long, effortlessly, this person interprets passing sentences, usually correctly, and often against a background of high levels of distraction. He or she is not even aware that any of them are potentially ambiguous. The human, perhaps, works with a better notion of meaningful units, and does not encounter ambiguity. The ambiguity that is studied is evidenced in carefully contrived short utterances, often of a kind which would be very unlikely to occur in texts. (I hope nobody actually does this, because I have deliberately exaggerated the problem to show how close it is to absurdity.)


Accidental ambiguities, which may be irresolvable even when the viewpoint is extended to the limits of practicality, are bound to arise, but very occasionally indeed; linguistic communication would be severely strained if they were more common than the sort of coincidence that happens once or twice in a lifetime.

Ambiguity above the phrase - at propositional or pragmatic levels - is no more common than ambiguity at word level, but needs separate treatment which would not be appropriate here. A representative account of the current NLP positions on the topic can conveniently be found in Monaghan (b). Most of the types presented there can be classified under a few familiar headings; in other words, none of these count as ambiguities that have to be resolved in language description. Indeed, it can justifiably be claimed that a model of language is inappropriate if it obliges the description to make distinctions in a particular text segment which are not necessary for the interpretation of the text as a whole.

III. The form of a linguistic unit and its meaning are two perspectives on the same event.

There may be relevance, for example, to research in cognitive psychology. It is accepted that form and meaning are very closely related, and that variation in one normally leads to variation in the other. As has been argued in the case of words, syntactic patterns may seem to vary independently of the meaning, as long as the cotext is kept to a minimum. So if active and passive constructions are presented bereft of cotext, their similarity of meaning ("who did what to whom") is highlighted, and their differences, which show in the higher organisation of the discourse, are not obvious at all.


The position adopted in this paper is intended to be in sharp contrast to the approximateness of the traditional view. It is asserted that form and meaning cannot be separated because they are the same thing. Considered in relation to other forms, a lexical item is a form; considered in relation to other meanings, it is a meaning. It follows from this tighter statement that ambiguity must in practice be very close to zero, or the statement would have to be seriously weakened.

Also, the form of a lexical item must include all the components that are realised in the example. Meaning cannot inhere satisfactorily in just a selection of the components of an item when there are other components left in the cotext, but requires them all to be assembled together, and a way of stating the structure of an item has to be devised. It follows from the requirement that all the components of a lexical item must be included in its specification that these genuine meaning-bearing items will have very little connection with their cotexts; all the choices that depend substantially on other choices will be grouped together in the item, and the text will be represented essentially as a succession of relatively large-scale and independent choices.

No doubt the reality will be a good deal more complicated than this sketch supposes. Discontinuity of lexical items is a strong possibility, and various kinds of embedding cannot be ruled out. Other choices, which may also be structural, are less obvious in the data studied at present because they are sporadic with reference to the pattern of choices.

The Axes of Patterning

In order to restate the problem of meaning, we must draw some general implications from the three hypotheses that have been discussed above. The main one, which pervades the whole argument, is that the tradition of linguistic theory has been massively biased in favour of the paradigmatic rather than the syntagmatic dimension.

Text is essentially perceived as a series of relatively independent choices of one item after another, and the patterns of combination have been seriously undervalued. It is easy to understand how this has happened; once again it depends on the nature of the observations and the stance of the observer.

The difficulty has been how to cope with the large range of variation that is apparent in most uses of language. This then obscures most of the structural relevance of collocation, and removes any chance of the precise alignment of form and meaning. It also presents the semantic level with the kind of problems that this paper is discussing. The opportunity to observe recurrent patterns of language in corpora has shown how choices at word rank co-ordinate with other choices round about in an intricate fashion, suggesting a hierarchy of units of different sizes sharing the realisation of meaning.

The largest unit will have a similar status to the sentence in grammar, and may coincide with sentence boundaries in many instances, in that it will be relatively independent of its surroundings with respect to its internal organisation (see the distinction between rank and level in Halliday). Meaning appears to be created by paradigmatic choice; this is within the orthodoxy of most theories, whether or not it is explicit.

This perception also relates meaning to the information of Information Theory. However, the mechanism of paradigmatic choice is so powerful that constant vigilance should be exercised to make sure that it is not misapplied. Sometimes in the actual use of language there is less choice than the paradigm is capable of creating, as in the example of counting chickens given earlier; in these cases to present the paradigm unqualified is to distort the description by claiming more meaning in an expression than is actually usable.

By giving greater weight to the syntagmatic constraints, units of meaning can be identified that reduce the amount of meaning available to the user to something more like his or her normal experience; the balance between the two dimensions will more accurately represent the relation between form and meaning. Such a conclusion calls for nothing less than a comprehensive redescription of each language, using largely automatic techniques.

Problems remain, particularly one concerning the inability of the paradigmatic and the syntagmatic dimensions to relate to each other.


They have no contact with each other, they are invisible from each other, and to observe one, the other has to be ignored. The phenomenon is similar to the observational problems that led to Heisenberg's famous Principle of Uncertainty in atomic physics. An atom can have both position and momentum, but these cannot be observed simultaneously, because the techniques for observing one cut out the possibility of observing the other.

Similarly, a word gives information through its being chosen (paradigmatic) and at the same time it is part of the realisation of a larger item (syntagmatic); in order to observe either of these, however, we lose sight of the other. Unless the requirements of the cotext are precisely stated, the word as a paradigmatic choice will be invested with far too much independent meaning; on the other hand, when observed purely as a component of a larger syntagmatic pattern, it can have very little freedom, and therefore can give very little information; it might be no more meaningful than a letter in a word, serving only the purpose of recognition.

A means must be found of relating the two dimensions in order to give a balanced picture. We are now in a position to restate the problem of meaning in tractable terms by means of the following hypothesis:

IV. The meaning of a text can be described by a model which reconciles the paradigmatic and syntagmatic dimensions of choice at each choice point.

    The model is set out in a preliminary fashion in Sinclair a. Five categories of co-selection are put forward as components of a lexical item; two of them are obligatory and three are optional. The optional categories realise co-ordinated secondary choices within the item, fine-tuning the meaning and giving semantic cohesion to the text as a whole.

    The optional categories serve also as a means of classifying the members of a paradigm, and thus the two axes of patterning, the paradigmatic and the syntagmatic, are related; the relationship is in principle capable of automation, and is quantifiable.
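A hedged Python sketch of how such a lexical item might be represented follows, assuming (as the discussion of budge below suggests) that the core and the semantic prosody are the two obligatory categories and that collocation, colligation and semantic preference are the three optional ones; the field names and the sample values are illustrative only.

```python
# Sketch of a lexical item with five categories of co-selection, assuming (as
# the discussion of budge below suggests) that the core and the semantic
# prosody are obligatory while collocation, colligation and semantic preference
# are optional. Field names and sample values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class LexicalItem:
    core: str                          # obligatory: the invariable core, e.g. "budge"
    semantic_prosody: str              # obligatory: the attitudinal reason for the choice
    collocations: list[str] = field(default_factory=list)          # optional: recurrent word forms
    colligations: list[str] = field(default_factory=list)          # optional: grammatical categories
    semantic_preferences: list[str] = field(default_factory=list)  # optional: semantic classes

budge_item = LexicalItem(
    core="budge",
    semantic_prosody="frustration at refusal or inability to move despite pressure",
    collocations=["refuse to", "won't", "wouldn't"],
    colligations=["modal verb", "grammatical negation"],
    semantic_preferences=["refusal", "inability"],
)
print(budge_item.core, "|", budge_item.semantic_prosody)
```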


The three categories that relate words together on either dimension are collocation, colligation, and semantic preference. The first two are Firth's terms. Collocation at present is the co-occurrence of words with no more than four intervening words. (Given the arguments of this paper, the word is no more reliable as a measure of the environment than it is as a unit of meaning, so this measure will have to be revised, but it is at present the only measure in general use.)
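A minimal Python sketch of this measure follows, counting collocates within a span that allows no more than four intervening words on either side of the node; the toy corpus is invented.

```python
# Sketch: counting collocates of a node word within a span of no more than four
# intervening words on either side, the measure described above. The toy corpus
# is invented for illustration.
from collections import Counter

def collocates(tokens: list[str], node: str, max_gap: int = 4) -> Counter:
    counts: Counter = Counter()
    for i, tok in enumerate(tokens):
        if tok != node:
            continue
        # max_gap intervening words means a collocate may be max_gap + 1 positions away.
        lo = max(0, i - max_gap - 1)
        hi = min(len(tokens), i + max_gap + 2)
        for j in range(lo, hi):
            if j != i:
                counts[tokens[j]] += 1
    return counts

toy_corpus = "he refused to budge from his position and she would not budge either".split()
print(collocates(toy_corpus, "budge").most_common(5))
```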

On the syntagmatic dimension, collocation is the simplest and most obvious relationship, and it is fairly well described. On the paradigmatic dimension it is defined rather differently, because items can only collocate with each other when present in a text, and two items in a paradigm are by that arrangement classed as mutually exclusive. The relationship is instead that of mutual collocation: so whereas manual and restoration are both significant collocates of work, they themselves do not co-occur significantly. Colligation as a paradigmatic concept is displaced, like collocation, to that of a mutual relationship; so a possessive may colligate with a particular noun, and so may the so-called 'periphrastic' construction. (At that time corpora were very small, so the figure was recently recalculated with reference to a much larger corpus, and with surprisingly little change.)

    Oliver Mason programmed the recalculation, and calls it lexical gravity; it is a function in the CUE system of corpus query language. This feature is relevant in the same way to both syntagmatic and paradigmatic phenomena. The three categories are related to each other in increasing abstraction; collocation is precisely located in the physical text, in that even the inflection of a word may have its own distinctive collocational relationships. Within the abstraction of the word class, of course, there may be one or more collocations. There may also be collocates, specific recurrent choices of word forms carrying the semantic preference.

Example

The word budge in English poses a problem for dictionaries; for example:

[Dictionary entry for budge, with two example sentences, not reproduced here.]

Budging is the overcoming of a resistance to movement, and so it concentrates on the beginning of movement. Even a little movement constitutes a budging, which might explain the definition above, without justifying it, because something once budged, i.e. moved even a little, has moved. The point is that English does not talk much about budging at all, but about not budging, where the quantity of movement is irrelevant. The two examples that follow the definition above are indeed both negative, but the entry reads as if the lexicographers had not noticed this primary fact of usage, and the user, of course, has no idea how to evaluate the presence of the negatives since they are not referred to.

It would be difficult to find an instance of this word which is semantically positive. The figure gives 31 instances from a corpus, all that there are in almost 20 million words.

[Figure: concordance of the 31 corpus instances of budge; the left cotexts include wouldn't, won't, didn't, did not, couldn't, can't, refused to, refuses to and has yet to.]

In most of the instances the word immediately preceding budge is a grammatical negative. Most of the others show the word to in this position, and by examining the word previous to that, there is a strong collocation with forms of the lemma refuse - nine in all.

Although not a grammatical negative, refuse can reasonably be considered as a lexicalisation of the kind of non-positive meaning that characterises budge. There are, then, just five remaining instances that do not follow one of the three prominent ways of expressing negativity. The two remaining instances show neither grammatical nor lexical negation; the second line expresses the refusal aspect with has yet to, implying that Mr Volcker refuses to budge, and the third line draws attention to a presumably long and unpleasant period preceding eventual budging (the extended cotext of this line is '... so deep with caustic dirt that skin would come off scrubbers' hands'). The negative quality of the phrase centred around budge is thus expressed in different ways, but with a predominance of collocations with refuse to and of negative inflections (wouldn't, didn't, couldn't).
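The classification just described can be sketched programmatically; in the following Python fragment the cue patterns and the example lines are invented stand-ins, not the corpus citations or the author's procedure.

```python
# Sketch: classifying the immediate left cotext of "budge" as grammatical
# negation, lexicalised negation with "refuse", or other, along the lines of
# the discussion above. The example lines are invented stand-ins, not the
# corpus citations.
import re

GRAMMATICAL_NEG = re.compile(r"(?:\bnot\b|\bnever\b|n't\b)", re.IGNORECASE)
LEXICAL_NEG = re.compile(r"\brefus\w*", re.IGNORECASE)

def classify(line: str, node: str = "budge") -> str:
    left = line.lower().split(node)[0]   # everything before the node word
    if GRAMMATICAL_NEG.search(left):
        return "grammatical negation"
    if LEXICAL_NEG.search(left):
        return "lexical negation (refuse)"
    return "other"

examples = [
    "he wouldn't budge from that position",
    "the humanity here just refuses to budge",
    "Mr Volcker has yet to budge",
]
for line in examples:
    print(classify(line), "<-", line)
```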

Colligation is with verbs, with modals (including able to) accounting for half the 30 instances. From this point I will not attempt to describe comprehensively the two instances above that imply rather than express negativity (lines 2 and 3). When the habitual usages of the majority of users are thoroughly described, we will have a sound base from which to approach the singularities, which may of course include much fine writing. Budge is an ergative verb, in that whatever is to be moved may figure either as subject or object of the verb; subject in an intransitive clause, and object in a clause where the subject is the person or thing making the attempt to move.

A guide to these alternatives can be found by looking at whatever immediately follows budge. Twelve times there is a full stop, twice a comma and once a dash - fifteen instances in all, or half of the total. Refusal is ascribed to whoever or whatever is not budging (won't, wouldn't). Of the first type, there are twenty instances expressing refusal or interpreted as implying it, all intransitive; where the subject is non-human and the verb modal (a snake, a quotation and a thermometer), we anthropomorphise, which is a kind of reversal. Of the second type, there are four examples; the non-budging is ascribed to inability, the 'agent' is in subject position, the clause is transitive and the person or thing that is not budging is named in the object.

    One instance has indications of both possible reasons; the modal cluster is won't be able to, and the clause is intransitive. This is a prediction of a future inability to budge, and won't does not indicate refusal. There remain five instances, of which four are didn't or did not. This usage is neutral with respect to refusal and inability; the structure is intransitive and so suggests that an agent is not important, but a person energetically trying to move a physical object is apparent in adjoining clauses in three of the cases.

    In the fourth the subject of budge is a person; the cotext makes it clear that he is under pressure to move, so it is closer to refusal than inability. In the transitive instances the non-budging item is object, the agent of movement is in the subject, the semantic preference is inability and there is strong colligation with the modals of ability. A minor optional element of the cotext of budge is the expression of the position from which there is to be no budging. There are eight instances, all beginning with a preposition; from four times, on twice, and above and off once each.

    Most are of the 'refusal' type. We consider why people use this word, why they do not just use the common verb move, with which any use of budge can be replaced. Something does not budge when it does not move despite attempts to move it. From the perspective of the person who wants something moved, this is frustrating and irritating, and these emotions may find expression, because this is the 'semantic prosody' of the use of budge.

    The semantic prosody of an item is the reason why it is chosen, over and above the semantic preferences that also characterise it. It is not subject to any conventions of linguistic realisation, and so is subject to enormous variation, making it difficult for a human or a computer to find it reliably. It is a subtle element of attitudinal, often pragmatic meaning and there is often no word in the language that can be used as a descriptive label for it. What is more, its role is often so clear in determining the occurrence of the item that the prosody is, paradoxically, not necessarily realised at all.

    But if we make a strong hypothesis we may establish a search for it that will have a greater chance of success than if we were less than certain of its crucial role. For example we can claim that in the case of the use of budge the user wishes to express or report frustration or a similar emotion at the refusal or inability of some obstacle to move, despite pressure being applied. Then there is an explanation for even and yet, and other scattered phrases from the immediate and slightly wider cotext of the instances. A selection of the evidence for pressure and frustration is given below, with reference to the figure.

    The evidence is probably enough to convince many human readers that the prosody exists and is expressed, implied or alluded to in most of the instances. This amorphous collection is an unlikely starter for being related to a structural category, and yet the claim is made that it is the most important category in the description. Without a very strong reason for looking, a computer would find virtually no reason for gathering this collection, but if we can predict a structural place for it, then at the very least the computer could pick out the stretch of language within which a prosody should lie, and whose absence was as significant as its presence.
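A hedged sketch of that idea: given a predicted structural place for the prosody, the Python fragment below merely picks out the stretch of cotext in which it should lie and flags candidate cues of pressure or frustration; the cue list and the example sentence are invented.

```python
# Sketch: once a structural place for the prosody is predicted, the computer can
# at least pick out the stretch of cotext in which it should lie and flag
# candidate cues of pressure or frustration. The cue list is an invented
# approximation, not an attested inventory.
CUE_WORDS = {"even", "yet", "still", "refused", "refuses", "pressure", "tried"}

def prosody_window(tokens: list[str], node: str, width: int = 8) -> tuple[list[str], set[str]]:
    """Return the cotext stretch around `node` and any prosody cues found in it."""
    if node not in tokens:
        return [], set()
    i = tokens.index(node)
    window = tokens[max(0, i - width): i + width + 1]
    return window, CUE_WORDS & set(window)

tokens = "she tried again and again but the door still would not budge an inch".split()
window, cues = prosody_window(tokens, "budge")
print(cues)  # {'still'} with the default window of 8 tokens either side
```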

    The core gives us the starting point, in the case of budge one that anticipates the prosody fairly clearly; the optional patterns of collocation, colligation and semantic preference bring out relevant aspects of the meaning, and the prosody can then be searched for in the close environment. It is not surprising that this is a very common structure in language, because it allows the flexibility that was identified earlier in this paper as essential for an adequate lexical item.

    The prosody is normally the part of an item that fits in with the previous item, and so needs to have virtually no restriction on its formal realisation, whereas the core, often in the middle or at the end of an item, is buffered against the demands of the surrounding text so that it can remain invariable. An item of this shape and structure makes it possible for the lexicon to have finite entries which are adequate to describe the way the meaning is created by the use of the item.

    In this lengthy description of the lexical item whose core is NEG budge I have not had reason to make a distinction that most lexicographers would regard as primary - the literal and figurative uses of the word. Where the option to express position is taken, the preposition on seems to be restricted to the figurative use, while from occurs with both.

Louw (personal communication) argues that 'literal' and 'figurative' are points close to the extremities of a continuum of 'delexicalisation'. Words can gradually lose their full lexical meaning, and become available for use in contexts where some of that full meaning would be inappropriate; this is the so-called figurative extension. Current models do not overcome the problem of how a finite and rigidly formalised lexicon can account satisfactorily for the apparently endlessly variable meanings that arise from the combination of particular word choices in texts.

    I have suggested that the word is not the best starting-point for a description of meaning, because meaning arises from words in particular combinations. The term 'lexical item', used to mean a unit of description made up of words and phrases, has been dormant for some years, but is available for units with an internal structure as outlined above. The lexical item balances syntagmatic and paradigmatic patterns, using the same descriptive categories to describe both dimensions.

    The identification of lexical items has to be made by linguists supported by computational resources, and in particular large general corpora. The impact of corpus evidence on linguistic description is now moving beyond the simple supply of a quantity of attested instances of language in use. It is showing that there is a large area of language patterning - more or less half of the total - that has not been properly incorporated into descriptions; this is the syntagmatic dimension, of co-ordinated lexicogrammatical choices.

(Such features as collocation are part of the control mechanism available to the writer: so in this example the use of several other words that could be interpreted literally keeps available the physical meaning of budge, while the overall interpretation of the passage will be institutional.)

Acknowledgements

The instances of language in use that I quote in my work come from a number of sources, in particular The Bank of English in Birmingham and The British National Corpus. I am grateful for permission to make use of these corpora.

References

Bach, E. An Introduction to Transformational Grammars. Holt, Rinehart and Winston.
Chomsky, N. Aspects of the Theory of Syntax.
Firth, J.R. Papers in Linguistics; Selected Papers of J.R. Firth.
Language Applications for a Multilingual Europe.
Essays in honour of Michael Halliday (ed.).
Talking about Text (ed.).
Le Trasformazioni del Narrare (ed.).
An account of the Cobuild project in lexical computing.
Report to the Office of Scientific and Technical Information.
Collins Cobuild English Language Dictionary.
"A study of translation equivalence". International Journal of Lexicography 9.
An Introductory English Grammar.

I will briefly mention two major changes of view. First, we have left structural linguistics with its autonomous areas. Accepting the pragmatic turn we have to face the complex fact that our subject matter, language, turns out to be language use in dialogic interaction. Lexical semantics represents only one component in the new communicative unit of the action game (cf. Weigand).

    Under such changed conditions we have to pose once again the question which is essential for every discipline, the question of how our description can be verified. I will not go into details but only mention an essential point that leads me to the conventionalist position in general. If we consider language as dialogic interaction in which not only the speaker is engaged, there must be, in my opinion, conventions of cooperation which are the basis for the dialogic process of coming to an understanding. Everything depends, however, on how we understand the term 'convention'.

Rules of morphology, for instance, are fixed, defined, to a large extent user-independent. In this fixed sense, too, the term convention was used in structural linguistics for the relation between the signified and the signifier of the sign. Such a concept of sign, however, does not lead us very far in pragmatics. By contrast, 'conventions', following etymology, are in principle dependent on the user or on groups of users. In the final analysis, conventions are not definite; they may always be overridden by specific circumstances. Conventions as an orientation for a group of users are mainly based on choice, as Lewis has told us and as we have seen in various ways by studying dialogic action.

    Conventions are either valid for a set of cases or for specific single routines. For instance, you may use the adjective thick in a variety of cases - thick wall, thick slice, thick carpet, etc. In this sense of restricting free choice, the term convention comes near to the term rule. It is, however, different in so far as a rule always covers a set of cases. The next question we have to tackle is the question of whether language is totally governed by rules and conventions as is assumed in structural and generative linguistics, or whether we have to go beyond rules and even beyond conventions.

As long as we were dealing with language as an abstract system, our subject matter was by definition governed by rules. Also some pragmaticians, who deal with language in use, keep strictly to rules and conventions. I remember Searle's speech act theory and the model of dialogue grammar to which I referred in my book "Sprache als Dialog" (Weigand). Recent research, however, has taught me to include yet another aspect. On the other hand, research on misunderstanding and emotion confirms that language use cannot be described along the lines of mathematical-logical models (Weigand b).

    We have to accept two things if we do not want to be restricted by artificial models: Second, as a consequence, our model can no longer be a doubled speaker model. Consequently, language use can no longer be described by a closed system of rules and conventions. How can such a complex subject matter be approached? We are in need of a methodological technique that allows us to come to terms with cases of probability. Such a technique, in my opinion, can be seen in the concept of principles used by the interlocutors.

    As interlocutors we guide ourselves by principles in social interaction. Principles give us a line of orientation and help us to come to terms with the multiple and sometimes indefinite aspects of the action game. On the basis of the Action Principle and the Dialogic Principle there are various other principles such as the Sequencing Principles, the Principle of Suggestion, the Rationality Principle, etc. The model of dialogic action games therefore will be an open model that contains not only the definite and quasi-definite of rules and conventions but also the indefinite and cases of probability on which in the end dialogic action is based.


    Now we have to ask for the methodological technique that is relevant for lexical semantics as a part of language use. Speaking of language use does not necessarily mean that we refer to performance or spoken language. We refer to the level of communicative competence that guides language use, be it in the written or spoken medium.

    There remains the problem of how we can discover these conventions. More concretely, this is the question of how we can verify our semantic analyses. In the past, we mainly relied on dictionaries and on questioning native speakers. They remain valuable sources, but alone they are not satisfying. We can no longer be content with the position that leaves semantics to introspection and intuition. Results in semantics should be controllable, reliable, repeatable.

I share Sinclair's conviction that we have to find "hard, measurable evidence" according to the criterion of frequency (Sinclair). Hard, measurable evidence can be gained from large text corpora by the use of advanced computer technology. The resulting frequency lists still have to be analysed within the framework of a model of language use.
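As a small illustration of such evidence, the Python sketch below produces a frequency list from raw text; the sample sentence is invented, and any real analysis would of course run over a large corpus.

```python
# Sketch: a frequency list as one form of "hard, measurable evidence". Any
# plain-text corpus could be substituted for the invented sample string.
from collections import Counter
import re

def frequency_list(text: str, top_n: int = 10) -> list[tuple[str, int]]:
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top_n)

sample = "the cat sat on the mat and the dog sat by the door"
print(frequency_list(sample, top_n=5))
# [('the', 4), ('sat', 2), ('cat', 1), ('on', 1), ('mat', 1)]
```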

    In a strict sense, there is no empirical evidence as such. We have to pose the questions that are indicated by our model. That means that we have to analyse the use of words according to our model and to check our results by the corpus. As linguists we consult dictionaries and ask native speakers when analysing foreign languages according to our model.

In the end, we have to check our assumptions about semantic conventions against representative text corpora. Analysing vocabulary in lexical semantics on the basis of text corpora is different from describing units of the lexicon in a generative model.

Semantics in a pragmatic model

Concentrating on a model for interlingual lexical analyses, we have first to clarify how to deal with lexical semantics in general; second, we have to establish a level of comparison for contrastive-interlingual analyses.

After the pragmatic turn, lexical semantics in general has lost its autonomous status. It has become part of language use and is integrated into the functional structure of the utterance, which can be represented by Searle's formula F(p) and be seen as the essential basis for a theory of language use. Within a pragmatic model, meaning in general will be a pragmatic concept, namely the purpose for which expressions are used by the interlocutors. We know the purpose for which utterances are used and we have called it action function.

The action function represents the superordinate function F which dominates the propositional function p. We may distinguish semantics proper or propositional semantics as part of the utterance, being dependent on the action function. As we already know from Aristotle, propositional semantics can be divided into reference and predication. We therefore have at the very beginning three fundamental types of meaning: the action function, reference, and predication. The next problem we have to tackle is a difficult one.

We have to find out how the linguistic means are correlated to these fundamental types of meaning. As I have explained elsewhere, even in the case of speech act verbs - for instance, in utterances like I predict that ... - the correlation is not a simple one-to-one mapping. Thus we have arrived at a level where we can analyse vocabulary as an area which is to some degree independent of the other parameters of the action game.

It remains to be clarified what 'predicate' means. In a model of dialogic action games human beings, or their language, cannot be differentiated from the world; on the contrary, human beings are seen as part of the world. The world can only be experienced by means of human abilities. By means of lexical expressions, the words-in-use, human beings predicate how they see the world to which they belong. The predicative function is, in accordance with the formula F(p), subordinate to the action function. By integrating lexical semantics functionally into a comprehensive pragmatic theory in this way, we get round the problem caused by the structuralist assumption of autonomous levels.

Abstraction, however, cannot be arbitrary, as is the case when structuralists jump from 'parole' to the sign system and thus totally lose their initial subject matter. Abstraction has to follow criteria. Even intending to abstract from the feature 'not repeatable', we nevertheless intend to keep to language use. Thus we arrive at the underlying level of our communicative competence. Language use does not mean that there are autonomous texts which are sent like packages or complex signs from speaker to hearer, as is represented in simple communication models.

    Communicative means are not only linguistic means but are based on all human abilities together. How could we assume that we use our ability to speak separately from our ability to think and add a third ability to see what goes on in the speech situation?

We are dialogically acting and reacting in the action game (Weigand). Having assumed three fundamental types of meaning, we are now faced with the problem of structuring in more detail the components of each type. Concerning action function and reference function, I will restrict myself to indicating them in a figure from Weigand (a): 'Model of the correlation between meaning and expression side of the utterance'. For our pragmatic notion of meaning as the purpose for which expressions are used, it is interesting to see that the correlation between meaning and expression side is not guided by a fixed relation of correlating expression and meaning within the sign, but by a principle of meaning equivalence used by the interlocutors.

It remains to be clarified what the units on the expression side are (cf. below); the two-level model of Bierwisch, Lang, Wunderlich and others may be compared. Starting from an individual language and analysing it semantically results in a semantics of that language. Thus we have meaning dictionaries of the English language, of the French language and so on. If we intend to correlate expressions of different languages we have to assume some sort of equivalence or interlingual synonymy between these expressions.

The concept of equivalence cannot simply be stated by intuition or native competence; it must be defined by semantic structure, which is nothing other than making competence transparent and explicit. If we deal with expressions from different languages, semantic structure must be a structure which is valid for more than a single language.

We take this quasi-universal structure as the common basis for our contrastive analyses, with reference to which we can decide what counts as equivalent expressions. In accordance with our position that there is no independent reality, only reality in the eye of the observer, the universal structure contains the ways by which human beings perceive the world, not the structure of the world itself.

    Structuring the world itself would be an impossible undertaking because there is no definite criterion, independent of human beings, according to which a specific structure could be laid down. Relating structure to human beings and their abilities gives us criteria for distinguishing different areas. In so far as the ways by which human beings perceive and act in the world depend not only on general human conditions but on special social conditions as well, it might indeed be the case that, starting from a universal level, meaning positions, for instance, of social behaviour have to be introduced that are valid for specific cultures only.

In so far as universal structure is interesting for us not on its own but as a level on which different languages can be compared, its units have to be considered as heuristic units without any ontological status. Just as there is no independent reality, neither is there an independent ontology. The fact that we consider universal structure to be a heuristic structure and not an independent notional or ontological structure constitutes a fundamental difference to onomasiological models such as that by Wierzbicka. Such concepts are artificial ones and the thesis of their existence remains vacuous.

In contrast to Wierzbicka, we introduce universal semantic units by referring to human abilities of perceiving and acting in the world and by looking at the ways in which we express our doing in different languages. As a consequence, the problems we have to solve refer on the one hand to the structure and units of universal semantics, and on the other to the ways-of-use in different languages: the contrastive grammar of predicating.

Structuring the lexical-predicative part

In our model of dialogic action games, everything has to be related to human beings and their cognitive and physical abilities. The structural attempt to establish and structure lexical fields independently was not able to achieve its purpose because a reference point for comprehending and dividing the whole was missing. Tracing the complexity of the world back to its centre, human beings, makes it possible for us to structure the whole picture. We assume as a working hypothesis and a starting point the following predicating fields:

[Figure: Universal predicating fields.]

These fundamental fields are intended to comprehend all the types of human abilities which are the basis for predicating.

    We therefore consider them to be the cognitive basis of our theory of lexical semantics. Fundamental fields are divided into partial fields and these are structured according to predicating positions which are considered to be the minimal units of meaning or the meaning positions in the lexical area. Similarly, ACTION comprises the whole field of intentional action which might be practical, linguistic, physical action as well as mental, visual and other intentional activity. Having arrived at partial fields of fundamental abilities, we have to come to terms with the problem of giving structure to these partial fields.

    At this level, we have to introduce predicating positions as minimal heuristic units of lexical meaning. In characterizing them as heuristic units, we dissociate ourselves - as already mentioned - from approaches which are based on independent semantic primitives.

In contrast, we try to establish universal structures by starting from human abilities and relating to them what seem to us essential semantic concepts and configurations. These predicating positions might in part be established by theoretically reflecting on possible semantic configurations; in part, however, they are empirically established, not only by differentiating semantically between the expressions of a single language but also by comparing the vocabulary of different languages. The remainder of meaning is left to the interlingual relation of equivalence or quasi-synonymy between expressions of different languages.

In contrast to 'semes' and 'atomic predicates', which can be considered as units of semantic decomposition starting from signs, predicating positions represent units of semantic composition starting from human abilities.
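A hedged Python sketch of this hierarchy as a data structure follows; apart from ACTION and its partial fields, which are named in the text, the labels and the English glosses standing in for predicating positions are invented placeholders.

```python
# Sketch: fundamental predicating fields -> partial fields -> predicating
# positions as nested mappings. ACTION and its partial fields are named in the
# text; PERCEPTION and all the English glosses standing in for predicating
# positions are invented placeholders.
PREDICATING_FIELDS: dict[str, dict[str, list[str]]] = {
    "ACTION": {
        "practical action": ["make", "use", "repair"],
        "linguistic action": ["assert", "ask", "promise"],
        "mental activity": ["consider", "decide"],
    },
    "PERCEPTION": {
        "visual perception": ["see", "watch"],
    },
}

def positions_of(fundamental_field: str) -> list[str]:
    """Collect all predicating positions grouped under one fundamental field."""
    partial_fields = PREDICATING_FIELDS.get(fundamental_field, {})
    return [pos for positions in partial_fields.values() for pos in positions]

print(positions_of("ACTION"))
```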


Besides analysing the meaning of signs into minor units by semantic decomposition, there is another way of indicating meaning, namely by means of paraphrases or synonyms which we find in monolingual dictionaries such as the Cobuild dictionary, even if it stresses the notion of use. Both ways are intended to explain meaning. Both start from expressions of individual languages, which is not the best starting point for contrastive semantics. Having established meaning positions on a universal semantic level, we no longer need paraphrases or synonyms as meaning classifiers.

We include them on the expression side as ways-of-use.

Units on the expression side

Having introduced meaning positions as units on the semantic level, we now have to tackle the problem of what the units on the expression side are. We admit that notional systems can be constructed from single words, but this is not our object.

Our object is to find the minimal lexical units that function on the level of the utterance. In language use, we do not take single signs and insert them in syntactic positions of the sentence according to rules of compositionality. On the level of the utterance, we find syntactically defined phrases, i.e. words-in-use or ways-of-use.

These syntactically defined phrases are the lexical units of our communicative competence. We are ill at ease when confronted with ways-of-use that are not well-established, and we feel relieved when we arrive at a conventional phrase stored as such in our memory. Therefore, in our pragmatic model, the units on the expression side are not single words but words-in-use or ways-of-use. These ways-of-use have a different syntactic status. The determiner in some cases does not seem to play a role on the lexical level; in general, however, it cannot be neglected, as we will see in our analyses in the article "The Vocabulary of Emotion" below.

There are NPs whose components seem freely combinable; such NPs have the status of examples of the 'principle of free choice' (Sinclair). Idiomatic ways-of-use, by contrast, we simply have to list individually and completely. In this sense, essential features of our mother language become evident by comparing it with a foreign language. Like Sinclair in this volume, we take 'multi-word' expressions as lexical units. Whereas Sinclair tries to gain these complex lexical units automatically and formally from the corpus, considering them as words in the context of other words which occur before and after them in the corpus, we define them functionally and syntactically as ways-of-use or phrases.

These ways-of-use are the lexical units that function on the level of the utterance in so far as they are the units that have predicating function. On the level of syntax they are the units from which the utterance is built up. The category of parts of speech represents a distributionally defined sub-part of ways-of-use, no longer the main syntactic reference point.

We started in universal structure from human abilities and did not represent objects. Human abilities refer to objects, which might be concrete visual objects in the case of visual perception or abstract objects in the case of rationality and reason. In this way we can include objects, too, by attaching them to the respective abilities. The fundamental concepts 'space' and 'time' could also be assigned to human abilities such as 'awareness', 'motion', and 'rationality', as is hinted at by Klein when he distinguishes between 'normal perceptual space' on the one hand and 'various types of space', which are to be thought of as more abstract, on the other.

Reflecting on the different expression types of vocabulary, we have, however, to take account of the difference between adjectives and nouns. Adjectives characterize, predicate, and do nothing else, but nouns predicate and additionally are able to refer.


Reference, however, is not the function of nouns; it is the function of NPs within the whole utterance. In so far as objects of referring are concerned - for instance, child, danger, eyewitness, awareness, etc. - they too can be attached to the respective abilities. Now one might object that one could nevertheless start with single words and indicate precisely their collocations.

      This technique results in a list of collocations, ways-of-use, which are subsumed under a single word. Even if we can thus achieve a valuable alphabetically ordered index of the words analysed, in theoretical respects such a technique has to be judged as problematic. First, this way of representation is theoretically deceptive in so far as it obscures the fact that it is not the single word that contains all the meanings listed below.

These meanings are a result of the way-of-use in which they occur. They are no longer readings of a polysemous sign but the meaning of a complex lexical unit which normally is no longer polysemous. We know their meaning when referring to ways-of-use. Second, it is impossible to indicate all paraphrases and synonyms, either mono- or multilingually. There is no way from thick to heavy if you refer only to single words. The same is valid for different languages.
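The point can be illustrated with a small Python sketch in which meanings are indexed by way-of-use rather than by the single word; the glosses are rough and invented, and the phrases echo the thick/heavy examples used in the text.

```python
# Sketch: indexing meaning by way-of-use (phrase) rather than by the single
# word. The glosses are rough and invented; the phrases echo the thick/heavy
# examples in the text.
MEANING_BY_WAY_OF_USE: dict[str, str] = {
    "thick wall":    "of large extent between its two surfaces",
    "thick slice":   "of large extent between its two surfaces",
    "thick forest":  "dense, with little space between the trees",
    "heavy traffic": "dense, slow-moving",
}

def meaning(way_of_use: str) -> str:
    # The way-of-use is the lexical unit; each entry is normally not polysemous.
    return MEANING_BY_WAY_OF_USE.get(way_of_use, "no conventional way-of-use recorded")

print(meaning("thick forest"))
```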

Again, the reference point has to be the way-of-use. Universal structure is correlated to ways-of-use in different languages. Because of the syntactic definition of ways-of-use, syntax finds its place in the description of language use. It becomes possible to start from universal semantics and to construct utterances of individual languages out of ways-of-use. The model can be called a cognitively based theory of use which relates cognition to use.

The principle of meaning equivalence

As we know, formal and semantic structure are not directly correlated; each of them has its own principles, and they do not fit 1:1. As is the case with the action function and the referential function, in the predicative field, too, it is the principle of meaning equivalence of expressions that establishes correspondences between configurations of meaning positions and units of expression considered as ways-of-use.

[Figure: Principle of meaning equivalence.]

Normally not only one way-of-use will be correlated to a specific meaning structure; rather, a few or several ways-of-use are considered equivalent, such as thick and heavy in thick forest, heavy traffic, or dick and dicht in dicker Verkehr, dichter Wald. We have already stressed that universal structure in our model represents only a heuristic structure. Thus we are not obliged to differentiate meaning to the finest degree. Consequently, we assume a relatively rough notion of equivalence.
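A minimal Python sketch of this principle follows, as a mapping from a (heuristic) meaning position to equivalent ways-of-use in two languages; the meaning label is an invented placeholder, while the phrase pairs come from the examples above.

```python
# Sketch: the principle of meaning equivalence as a mapping from one (heuristic)
# meaning position to equivalent ways-of-use in different languages. The meaning
# label is an invented placeholder; the phrase pairs come from the text.
EQUIVALENCE: dict[str, dict[str, list[str]]] = {
    "DENSE (of masses of things)": {
        "en": ["thick forest", "heavy traffic"],
        "de": ["dichter Wald", "dicker Verkehr"],
    },
}

def equivalents(meaning_position: str, lang: str) -> list[str]:
    """Ways-of-use in `lang` correlated with the given meaning position."""
    return EQUIVALENCE.get(meaning_position, {}).get(lang, [])

print(equivalents("DENSE (of masses of things)", "de"))  # ['dichter Wald', 'dicker Verkehr']
```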

      The problem of further differentiation can also be considered a problem of the level we are addressing.