Papers and Talks
Section 3 - Evening Talks by Faculty


THE CHARACTERISTIC FEATURES OF THE DRAVIDIAN LANGUAGES

P. S. SUBRAHMANYAM

(Annamalai University, Annamalainagar)

This paper attempts to present the characteristic features of the Dravidian linguistic family that distinguish it from the Indo-European family, or from one of its branches, the Indo-Aryan family of languages. Although it is mainly based on Robert Caldwell's remarks in his book, A Comparative Grammar of the Dravidian or South Indian Family of Languages, it contains some additions to and modifications of his views in the light of modern research.
1. All vowels in the Dravidian languages show a contrast between short and long. This is true also of the vowels e and o, of which the short variety is missing in early Indo-Aryan. The retroflex consonants are quite common in Dravidian; their presence in Indo-Aryan is attributed to the influence of Dravidian by many scholars (see Caldwell 1856:147f, Jules Bloch 1965, M. B. Emeneau 1956:7). The Proto-Dravidian language is characterized by the presence of six series of stops, in contrast to the five series of stops of Indo-Aryan. The peculiar alveolar stop of P Dr has later merged with the earlier dental or the retroflex (or, in a few languages like Tulu and Kui-Kuvi, with the palatal) stop; this change seems to be induced by the Indo-Aryan languages.
2. The gender systems in Dravidian fall under three types. Two languages, Toda and Brahui, have lost gender distinctions in later periods.

Type I: All SDr languages except Toda.

M.sg., F.sg. : Hum.pl.
N.sg. : N.pl.


Type II: Telugu and Kuṛux-Malto (in the latter two languages the neuter plural is absent)

M.sg. : Hum.pl.
F & n.sg. : N.pl.

Type III: All C Dr languages except Telugu.

M.sg. : M.pl.
F & n.sg. : F & n.pl.

It is by now established that, of the three, the second type with optional neuter plural represents the P Dr gender system (for details, see Subrahmanyam 1969a).

3. In Dravidian the order of the various suffixes in a noun, when they are present, will be as follows:


N + (Plural) + (Oblique) + Case suffix/Postposition

The oblique base formation and the usage of postpositions are a speciality of the Dravidian linguistic family. In Indo-European the case suffixes differ according to the number (sg., dual or plural) of the noun, and it is difficult to separate the number and the case suffixes. Modern Indo-Aryan resembles Dravidian rather than Sanskrit in this respect.
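To make the suffix order concrete, here is a minimal Python sketch (not from the paper; the morpheme labels are invented placeholders, not real Dravidian forms) that composes a noun according to the template above.

    def build_noun(stem, plural=None, oblique=None, case_or_postposition=None):
        """Compose a Dravidian-style noun: N + (Plural) + (Oblique) + Case/Postposition."""
        parts = [stem]
        for slot in (plural, oblique, case_or_postposition):
            if slot is not None:          # the bracketed slots are optional
                parts.append(slot)
        return "-".join(parts)

    # purely schematic glosses, e.g. NOUN-PL-OBL-DAT
    print(build_noun("NOUN", plural="PL", oblique="OBL", case_or_postposition="DAT"))
    print(build_noun("NOUN", case_or_postposition="LOC"))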
4. The neuter plural suffix in Dravidian is an optional category in both S Dr and N Dr. It is obligatory only in C Dr and there it seems to be an innovation. This is so in both nouns and finite verbs. In Tamil, for example, there is no number distinction at all in the future tense neuter forms, e.g., irukk-um 'it/they (neut.) will be'.
5. The adjectives in Dravidian are indeclinable unlike the adjectives in Indo-European.
6. Most of the Dravidian languages have two forms for the 1st person plural pronoun, one exclusive and the other inclusive. The P Dr forms for these are respectively *yam and *nam. A few modern Indo-Aryan languages like Marathi and Gujarati have developed such a distinction due to the influence of the Dravidian languages (for details, see Subrahmanyam 1967-68).
7. The verbal adjectives play an important role in Dravidian. They have the following structure:
Verb stem-Tense Marker-Adjective Marker (*a/*i). Neither the IE family nor its Indo-Aryan branch has a category similar to this one in Dravidian. In Dravidian the verbal adjectives are usually found in three tenses, i.e., past, non-past and negative.
8. The adverbial participle or gerund is another peculiarity of Dravidian. It consists of the stem and the past tense marker (followed by the enunciative vowel u if ending in a consonant). The frequent usage of a similar category in Indo-Aryan is attributed to the influence of Dravidian (see M. B. Emeneau 1956).
9. Most of the Dravidian languages have means to express negation in the verb by using a special suffix, e.g., Old Ta. ceyy-Ø-en, Te. ceya-nu, Go. kiyy-o-n 'I will not do'. Excepting Kuṛux and Malto, all other Dravidian languages have this construction; those two languages might have lost it at a later period. The structure of the negative verb is as follows:
Verb stem-Negative Marker-Personal Suffixes.
It is to be noted that a negative adverbial participle as well as a negative adjectival participle are also found in most of the languages.
Furthermore, a number of Central Dravidian languages have a morphological construction to express the past negative. Its structure is:
Verb stem-Negative Marker-Past Marker-Person Marker. It is found to occur in Konda, Kui-Kuvi, Kolami, Naiki, Parji and Gadaba. E.g.

Kolami si-e-t-an 'I did not give'
(For details, see Subrahmanyam 1969b).

10. Lastly, the Dravidian languages are distinguished from Indo-European by the absence of these two categories:
(1) A Passive Voice at the morphological level, and
(2) Relative Pronouns, the function of which is carried out by the relative verbal participles.

References

Bloch, Jules, 1965. Indo-Aryan from the Vedas to Modern Times (English tr. by Alfred
Master). Paris.
Caldwell, Robert, 1856. A Comparative Grammar of the Dravidian or South Indian
Family of Languages (reprinted). Madras.
Emeneau, M. B., 1956. 'India as a Linguistic Area', Language 32:1-16.
Subrahmanyam, P. S. 1969a 'The Gender and Number categories in Dravidian'
Journal of the Annamalai University 26:79-100.
-1969b 'The Central Dravidian Languages' JAOS 89:739-50.
-1967-68 'The Personal Pronouns in Dravidian' BDCRI 28:202-17.

VERBAL LEARNING

C. H. K. MISRA

(NCERT, New Delhi)

Verbal Learning as a branch of psychology deals with the acquisition of verbal habits. Learning to spell, to read, to memorise a poem or to acquire a foreign language vocabulary are all included in the general area covered by Verbal Learning. However, in order to have controlled experimentation, psychologists use controllable verbal units in their laboratory studies. In such studies present-day psychology does not emphasise finding general laws resulting in 'learning as a function of ...' types of conclusions, but rather the establishment of relationships that account for a substantial segment of variance in the Verbal Learning phenomena. These relationships bind together reliable measures of the learning phenomena with their relevant conditions. In this sense Verbal Learning concerns itself with a fairly narrow range of behaviour, unlike the formal learning theories of Hull, Spence and Tolman, where the emphasis is on more pervasive concepts that enter into a much wider range of human or animal behaviour.
Starting from the work of Ebbinghaus (1885) it has been realised that, for simplification of the Verbal Learning situation, single verbal units are better than prose passages, stanzas, etc. From the findings of Ebbinghaus it is clear that some amount of regularity and predictability of verbal behaviour is possible. In fact, the famous 'curve of forgetting' reported by him (1913) was fitted to a logarithmic function. For uniformity and ease in presentation, Ebbinghaus used a simple apparatus called the 'memory drum'. This is simply a uniformly moving drum to which verbal units are attached so that they appear through an aperture.

Types of Verbal Units: It was apparent even to the early workers that one of the first things to be controlled under experimental conditions is the meaningfulness of the verbal units. The more meaningful a verbal unit is, the more memorable it becomes. Therefore, rather than words, phrases or sentences, digits or scrambled letters were preferred as verbal units. Ebbinghaus devised what is known as the 'nonsense syllable'. The nonsense syllable is a consonant-vowel-consonant combination resulting in no meaningful word. These proved to be very useful for memory studies.

Types of presentation: Generally there are two types of presentation of verbal units. One kind of presentation is called a 'serial list' and the other is called 'paired associates'. The data under the first presentation are usually referred to as serial learning and those under the second are called paired-associate learning. We shall see that there are some subtle differences between the two kinds of learning. In the serial presentation the verbal units are presented one after the other at short intervals, say two seconds, and the learner is supposed to anticipate what is coming next. After some trials the subject comes to anticipate the items correctly. The number of correct anticipations grows from trial to trial. The standard number of items presented in the list is from 8 to 12, for both kinds of presentation. In the paired-associates list the verbal units are presented in pairs. A pair may consist of two nonsense syllables, or a nonsense syllable with a digit, or a word with a nonsense syllable, and so forth. The first member of the pair is called the stimulus and the second member of the pair is referred to as the response. The learner is shown a verbal unit (the stimulus) first, then the stimulus and response together, and then the stimulus alone, and is asked to associate the response component with the specific stimulus item. In paired-associate learning confirmation is usually given after the response, that is to say, the subject is shown the correct pair again. The order of presentation is mixed from trial to trial as a precaution against the growth of serial learning for the stimulus or the response components.
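As a rough illustration only (the item pairs below are invented, and the procedural details are assumptions rather than any standard laboratory software), the paired-associate presentation just described can be sketched in Python as follows.

    import random

    pairs = [("XOJ", "7"), ("ZUW", "3"), ("QEP", "9")]   # invented stimulus-response pairs

    def run_trial(pairs, anticipate):
        """One paired-associate trial: show each stimulus, collect the learner's
        anticipation, then (implicitly) confirm by showing the correct pair."""
        random.shuffle(pairs)            # order mixed on every trial, as described above
        correct = 0
        for stimulus, response in pairs:
            if anticipate(stimulus) == response:
                correct += 1
        return correct

    # e.g. a learner who has so far memorised only one pair:
    print(run_trial(pairs, lambda s: "7" if s == "XOJ" else "?"))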
It has been found that the time required to learn a serial list is directly proportional to the length of the list; as the correct responses increase, the errors decrease; this process is faster in the first few trials and slows down later on. There is another technical phenomenon known as the 'serial position effect' in serial learning. The greatest number of correct responses occurs at the first part of the list, the next greatest at the end, and the fewest in the middle of the list; that is to say, if we plot a graph of the number of correct responses against the first, second, third, etc., positions of items in the list, we will get a bowed curve. This is known as the serial position curve. As to why this serial position curve develops, there are a number of explanations. Some of these explanations refer to what is expressed as 'retroactive inhibition'. It shows that each item generates some inhibition, which gets accumulated in the middle. There have been attempts to define the 'functional stimulus' in serial learning. The 'chaining hypothesis' for defining the functional stimulus for serial learning states that the functional stimulus is the preceding item of the list; that is to say, if we represent our verbal units of 7 items as a-b-c-d-e-f-g, the learning proceeds at first as a-b, b-c, c-d, etc., and then finally is integrated into a continuous chain. One alternative to the chaining hypothesis is the 'cluster hypothesis'. According to it a cluster of two or more items rather than a single item serves as the functional stimulus. Many experiments have been carried out to examine these hypotheses. To explain the serial position effect there is an equal number of theories. According to the Lepley-Hull theory the inhibitions operate in a classical conditioning model. The conditioned stimulus for any item consists of the attributes of the immediately preceding item (as in the chaining hypothesis), and traces of these attributes are found in all succeeding items in the list. In our list from item a to g, a enters into association with b via a positive excitation and acts as a trace for all the others in the list.
In other words, for response b to occur with the stimulus a, all the others must undergo a delay, or inhibition of delay. Thus, for the association a to b, the items c, d, e, f and g must undergo the inhibition of delay until their appropriate position is reached. Therefore, for a-b there are 5 inhibitions. Similarly calculated, the association b to c has 4 inhibitions, since each association is said to be independently conditioned. There have been refutations of this theory, and other alternatives have been advanced (McCrary and Hunter, 1953; Braun and Heymann, 1958), notably the theory of response integration, with forward and backward learning occurring at differential rates (Jensen, 1962; Ribback and Underwood, 1950). Under this theory the first item serves as an anchoring point for the response integration of the items in both forward and backward directions. In our symbols, a is learned first; then it is used to integrate in the forward direction a-b, then in the backward direction g-a, then in the forward direction ab-c, then in the backward direction f-ga, and so on.
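The arithmetic implied by this account can be made explicit with a small sketch (my own illustration, not part of the original discussion): for each association, the inhibition count is simply the number of items that follow the response term.

    items = list("abcdefg")                     # the 7-item list used above

    for i in range(len(items) - 1):
        inhibitions = len(items) - (i + 2)      # items after the response term
        print(f"{items[i]}-{items[i + 1]}: {inhibitions} inhibitions")
    # a-b: 5 inhibitions, b-c: 4 inhibitions, ..., f-g: 0 inhibitions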

Meaningfulness: As has been pointed out earlier, meaningfulness affects the acquisition of verbal units. Experimentally, meaningfulness is defined in two ways. In one, verbal units are presented to subjects one at a time, for a limited interval (say 2 to 4 seconds). They report their first associations, that is to say, the first meaning that occurs to them from the verbal unit. The percentage of subjects producing an association is then a measure of the meaningfulness of the verbal unit; this measure is known as the 'association value' of the verbal unit. In a second kind of experimental control the subject is given a verbal unit for a given interval (e.g. 60 seconds). He then writes down all the associations occurring to him. When administered in a group, the mean number of associations evoked by the verbal unit is taken as the association value of the corresponding verbal unit.

Let us see how meaningfulness affects verbal acquisition. Some general findings from exact experiments are listed here. For verbal learning one of the most important measures is the rate of learning. This is measured in terms of the time taken for acquisition as well as the number of trials required for acquisition. The curve exemplifying the relationship between meaningfulness and rate of learning is found to be S-shaped. If we construct lists consisting of verbal units increasing in meaningfulness, the overall difference in rate of learning is about three times; e.g., if one list is constructed of units of very low meaningfulness like RXQ and another of very high meaningfulness like BEL, the latter will be learnt three times as rapidly as the former. Another interesting finding in terms of meaningfulness comes from paired-associate presentation. It will be recalled that in the paired associates there are stimulus components and response components. It is possible to vary the meaningfulness of stimuli and responses independently in different experimental conditions. When the meaningfulness is varied within a specific range for both stimuli and responses, the rate of learning is more affected by a change in the meaningfulness of the responses than of the stimuli. Such findings and others like them point out a significant fact about verbal learning. Verbal learning occurs in at least two stages. During the first stage the subject tries to learn the responses. During the second stage he associates these responses with the corresponding stimuli. The association value of the stimuli, therefore, should primarily influence the second stage, while the association value of the responses should affect both stages. Just how the differences in meaningfulness produce differences in the rate of learning is, however, a matter of debate. Although meaningfulness is a subjective phenomenon, it can be measured experimentally through controlling 'experience'. Experience can be defined as familiarity through frequency of occurrence. For instance, we can say that the higher the association value of a verbal unit, the greater is its frequency of occurrence in the language. Further, if under laboratory conditions some previous experience is provided, almost the same effects are obtained. The greater the frequency, the easier the subsequent learning of verbal units. This position obtains indirect support from studies which use sentences having varying degrees of approximation to the structure of a given language. It has been seen that the closer the approximation to the actual structure, the easier they are to learn.
Intra-list similarity: Another general finding about rate of learning is with reference to intra-list similarity. It is seen that the higher the intra-list similarity, the slower will be the acquisition. Experimentally, similarity is varied in nonsense syllables and consonants by non-systematic duplication of consonants: the fewer the number of letters used in the list, the greater will be the similarity. Differences in the similarity of meanings are also used for these studies. The common-sense explanation for such a phenomenon is that there is confusion in the mind of the subject as to 'what goes with what'; thus if in a paired-associate list we put the word 'unclean' in one pair and attach the word 'dirty' as its response, then the subject cannot use the difference in meaning as a cue for differentiation. Technically speaking, the inverse relationship between intra-list similarity and rate of learning results from interference produced by generalization among stimuli or generalization among responses. This is similar to stimulus generalization and response generalization in conditioned reflex learning. In paired-associate studies it has been seen that variation of intra-list similarity among responses produces a smaller change in the rate of learning than corresponding variation among stimuli. This is probably because by such a process the response-recall stage referred to earlier is facilitated, even though it has a possibility of retarding association with specific stimuli. As will be obvious, the effect of intra-list similarity increases as the meaningfulness decreases. The difficulty with intra-list similarity decreases if the grouping and arrangement are done in such a manner that units are presented together with similar items. For example, if we have a 12-item list and arrange it in blocks of four having similar meanings, its learning will be more rapid than with a random presentation of the same 12-item list.

Affectivity: This is another phenomenon which has been extensively studied in verbal learning. Originally researchers were interested in testing Freud's repression theory. If certain words are more unpleasant and certain words are more pleasant, then the latter should have a better chance of being learnt. Accordingly some words are first rated by judges. Words like vomit, death, etc., are unpleasant and words like love, mother, etc., are pleasant. From comparative analysis, however, no consistent evidence exists that lists varying in affectivity produce different rates of learning.

Transfer: In verbal learning three major phenomena, namely acquisition, retention and transfer, are studied. Transfer is defined as the usability of one learning in another situation. Transfer is said to be positive if an earlier learning experience facilitates the later learning, and negative if vice versa. If a subject is made to learn one list a day for several days, his performance on the last day will be markedly superior to that on the first day. The improvement may be as much as double the rate after 8 days. This is the phenomenon which Postman calls 'learning to learn'. Effects of this nature are greater in serial than in paired-associate learning. It has been found that the higher the degree of first-task learning, the greater is the positive transfer. Maximum positive transfer is observed on the first few trials of the second list. Similarity between two responses, one in each list, has very little effect; unless high similarity exists between the stimuli paired with the same response we do not find any appreciable transfer. Similarity between stimuli in two paired-associate lists by itself cannot produce transfer. If stimulus similarity is high and response similarity is low, this will produce negative transfer.

Massed versus distributed practice: Massed practice is defined as practice having only a few seconds between trials (e.g. 2 seconds). Distributed practice involves longer inter-trial gaps (say 30 seconds). Distributed practice has been recommended as the most economical for any acquisition, but recently it has been found that distributed practice will enhance serial or paired-associate learning only under two conditions: (a) the presentation rate of each item within a trial must be fairly rapid (say two to four seconds) and (b) interference must be relatively heavy, as with intra- or inter-list similarities. Distributed practice obviously takes more time. If time is kept constant, no positive effect is seen. However, distributed practice is advantageous for a longer list.

Some basic theoretical problems: What conditions must be fulfilled before verbal associations can be formed? From the 1930s onwards there have been many investigations into factors affecting the rate of association formation. More recently there has been a concern with the memory repository. Memory traces presumably exist as physical entities in the cerebral cortex. The main concern of researchers has been to establish how verbal learning is determined and whether structural changes in the neurons are involved. The central nervous system must hold the memory traces in some way. Through brain operations, by exciting the cerebral cortex, a subject can be made to report his experiences fairly accurately, yet we know precious little about the nature of the memory traces. However, certain conditions appear to hold for the memory traces. Firstly, the verbal units having some form of similarity and strong association with each other ought to be grouped together. Secondly, within a given grouping the most available response should be that which has been most frequently experienced. Through these explanations we are able to understand certain facts like the role of meaningfulness in the response-learning phase, but how two units from different groupings become associated remains unsolved if we take only the neuro-physiological explanations; mediational explanations also beg the question regarding the manner in which the mediators themselves enter into verbal association. There are two major approaches to all learning phenomena. One is called the neobehaviorist approach, explaining all learning through either neuro-physiological connections or through reinforcements. The other is called the cognitive approach, which tries to explain learning through perceptual patterning and insightful solutions. A number of studies wanting to verify the appropriateness of either approach to verbal learning have been conducted.
Research in verbal learning is presently producing phenomena and theory which touch, sometimes in a very fundamental way, all the areas of human learning, from simple conditioning to the study of thought processes (Underwood 1964). The results of investigations in verbal learning are affecting other areas of psychology, like clinical psychology. For example, it has been found that greater negative transfer occurs in schizophrenics (Kausler, Lair and Matsumoto 1964) in certain transfer paradigms. Similarly, we can find significant contributions of verbal learning investigations to attitude learning, motor learning, psycholinguistics, etc. Thus verbal learning calls for more attention from all researchers concerned with human behaviour.

References

Braun, H. W. and Heymann, S. P. 'Meaningfulness of material, distribution of practice
and serial-position curves', J. Exp. Psychol., 1958, 56, 146-150.

Ebbinghaus, H. Über das Gedächtnis: Untersuchungen zur experimentellen Psychologie,
Duncker and Humblot, 1885.

Jensen, A. R. 'Transfer between paired associate and serial learning', J. Verb. Learn. Verb.
Behav., 1962, 1, 269-280.

Kausler, D. H., Lair, C. V. and Matsumoto, R. 'Interference Transfer Paradigms and the
Performance of Schizophrenics and Controls', Journal of Abnormal Social Psychology, 1964, 69, 548-87.

McCrary, J. W. and Hunter, W. S. 'Serial position curves in verbal learning', Science,
1953, 117, 131-134.

Ribback, A. and Underwood, B. J. 'An empirical explanation of the skewness of the
bowed serial position curve', J. Exp. Psychol., 1950, 40, 329-335.

Underwood, B. J. 'The representativeness of rote verbal learning', in C. N. Cofer (ed.),
Verbal Learning and Verbal Behavior, New York: McGraw-Hill, 1964.


A SEARCH FOR PANCHRONIC FEATURES OF
INDO-ARYAN NASALS

R. N. SRIVASTAVA

(University of Delhi, Delhi)

0.0. The present paper is an attempt to establish some panchronic invariants in respect of nasals and the nasalization process from within the parochial characteristics which the different dialects of Indo-Aryan (IA) display. Although the observations are confined only to certain aspects of nasals and nasalization processes and the data are drawn only from the IA languages, an attempt has also been made to formulate higher generalizations so that, from uniformities of universal scope, the idiosyncrasies of languages may be drawn.

0.1. Old Indian grammarians have set up three distinct categories in respect of nasals and the nasalization process: (a) nasikya, a cover term for nasal mutes; (b) anusvara, a cover term for preconsonantal homorganic nasals; and (c) anunasika, a cover term for nasalized vowels. As there is a fair amount of discord as to the use of the terms anusvara/anunasika in the earlier writings of Indian grammarians, it is advisable to define first the scope and meaning of these as groundwork for further discussion here.

0.2. Nasikya has been accepted here as that nasal segment which, when represented on the level of dictionary representation, shows some attributes related to the point of articulation. Furthermore, the MS-rules which map the dictionary representation of nasikya onto the systematic phonemic representation are not conditioned by the syntagmatic relations of the segment concerned.
On the other hand, anusvara and anunasika, as a group, can be opposed to nasikya in two respects: firstly, they are 'conditional' segments and can be realized only on the basis of their syntagmatic relations; secondly, on the level of dictionary representation they differ from nasikya in their feature complex by not showing any attribute related to the point of articulation.
Anusvara has been accepted here as a cover term for those nasal segments which, though nasal archisegments on the dictionary level of representation, are actualized in the form of homorganic nasals. Anunasika, on the other hand, is the cover term given to those nasal segments which, like anusvara, appear as nasal archisegments on the level of dictionary representation but are realized as nasalized vowels.*

*It is interesting to note that ancient Indian grammarians observed that the same basic linguistic unit (i.e. the nasal archisegment) underlies both cases of realization, anusvara and anunasika. This has been more clearly

0.3. In order to understand the real nature of nasals and nasalization process and have a clear and non-ambiguous discussion on this subject, it is important to let one term stand for one particular unit and further, the unit be defined by the place it occupies on the defined level of representation.
The real limitation in the treatment of nasal segments by the Indian grammarians lies in the fact that the same term has been left to stand for units of different levels of representation. For example, the Atharvaveda Pratišakhya uses the term anunasikya to stand for both the nasal archisegment as well as its actualized variant, the nasalized vowel. The Taittiriya school uses the term anusvara ambiguously to designate the nasal archisegment and its realized variant, the homorganic nasal. Similarly, in the Ṛgveda Pratišakhya the term nasikya stands for nasals, the nasal archisegment and nasal vowels.
In this paper, terms like nasikya, anusvara, anunasika, etc. will be used as defined in section 0.2. We shall employ the sign N to mean any nasal sound, whereas N* will stand exclusively for the nasal archisegment.

A. Nasikya (nasal mutes)

1.0. (a) Every IA language has at least two nasal mutes in its inventory.
(b) The two nasal mutes are invariably bilabial m and dental n.
This attests the fact that the existence of non-anterior nasal sounds presupposes the existence of anterior sounds. In other words, non-anterior sounds like ŋ, ṇ, etc. require solidarity with anterior sounds like m and n. This solidarity is not reversible.

1.1. (a) Frequency of m and n is always greater than other nasal sounds. The
distribution of these two nasal sounds is less restricted than non-anterior nasal mutes.
(b) Amongst the bilabial and dental nasal mutes, the frequency of dental nasal is
higher than the bilabial one.

1.2. Nasal mutes presuppose the presence of corresponding oral obstruents.

1.3. Phonologically, nasal mutes are always voiced.

1.4. (a) Phonologically, there appear to be only three sub-types of nasal mutes:
pure nasals, aspirated nasals and palatalized nasals.

brought into notice by Varma (1961:148): 'In both cases it is the m that has led to a particular change; in both cases no original nasal vowel has been acknowledged. It is a 'conditional' sound, appearing only under certain conditions, or, as the Carayaniya Siksa would have it, Anusvara is a dependent sound, which can manifest itself only on the basis of another sound. In the same way Kaccayana, in his Pali Grammar, terms the Anusvara as Niggahita or arrested m. Whether the m is arrested, dropped, or changed, it is essentially the same phenomenon, termed Anusvara by Pāṇini, Niggahita by Kaccayana, and Anunasika by the Atharvaveda Pratišakhya.'

(b) The form of the aspirated and palatalized nasals is invariably the outcome of the historical development of N+(c)h or N+S (where S = any sibilant) and N+i/y respectively. (Cases of borrowing are excluded here.)

B. Nasal Archisegment (N*)

2.0 In every language of the IA family, the dictionary representation of the lexicon attests the presence of N*.

2.1 N* may be realized in a language as anusvara or anunasika or as both.
When N* is realized as both anusvara and anunasika, the two create stylistic indices within the monolithic aspect of a language, or they reflect diatopic differences on the spatial dimension of language usage (Srivastava 1969).
Taking 2.0 and 2.1 into consideration, one may conclude that the distinction within nasals for IA languages is invariably twofold on the level of dictionary representation (i.e. nasikya vs. nasal archisegment), while on the level of systematic phonemic representation it may be twofold or threefold. Schematically, it may be represented as below:

[Figure: schematic of the nasal distinctions on the dictionary and systematic phonemic levels]


2.2 The occurrence of N* is confined to the position V-C (or V-#), i.e., it occurs in a post-vocalic position followed by a consonant or a morpheme boundary.
All those instances of contrast which have prompted linguists like Pandit (1957), Cardona (1965), Kelkar (1968), etc. to hold the view that N* does not occur exclusively in the V-C position are in fact based on the evidence of minimal pairs which are phonetic (rather than phonemic) in nature.
It has been shown elsewhere (Srivastava 1969) that all those instances which contradict the exclusive occurrence of N* in the V-C position can, on the basis of the underlying representation of the lexical items, be divided into two groups: (a) the phonetically realized nasal is not a nasal in the phonological representation, and the realization of this segment is conditioned by a Sanskrit sandhi rule which may be represented as in P-Rule 1.
P-Rule 1.

[+consonantal, -vocalic] → [+nasal] / ___ + [+nasal]

and (b) in the underlying representation a lax vowel exists between the nasal and the following consonant, i.e., the V-V rather than the V-C condition is attested. The mapping of the V-VC structure onto V-C is governed by the lax vowel deletion rule (P-Rule 2).

P-Rule 2.

[V, -tense, +low] → Ø / (C)VC ___ C [V, +tense]
It should be pointed out that in those languages where phonetically minimal pairs create an asymmetrical distribution of nasals, there exist P-Rules 1 and 2. For example, the phonological representation of the word meaning 'gleam/brightness' in Hindi (H), Punjabi (P), Gujarati (G) and Marathi (M) is the same, /camaka/, but the phonetic realizations of this lexical item are different in the transitive (Tr) and intransitive (It) verb forms.

Lexicon              Verb (Tr)         Verb (It)

/camaka/     H.      [camkana]         [camakna]
[camak]      P.      [camkauṇa]        [camakṇa]
             G.      [camkavũ]         [camakvũ]
             M.      [camkaviṇẽ]       [camakṇẽ]

-mak-                -mk-              -mak-

It is interesting to observe that the lax vowel deletion rule is not operative in Oriya (O), and hence we do not get the -mk- cluster in the transitive form of the above-mentioned verb. (But O offers another type of complexity, which is discussed in section 3.2c.)
O. /camaka/ [camaka]: [camakaiba] Tr, [camakiba] It
It is apparent that phonetically realized minimal pairs like (sanki)-(sãka) H, (janki)-(vãki) G or (ḍaṇka)-(dãka) M cannot be said to form minimal pairs creating an asymmetrical distribution of nasals.
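A toy implementation of the lax vowel deletion can illustrate how the transitive and intransitive forms diverge. This is only a sketch under simplifying assumptions (ASCII 'a' stands for the lax low vowel and 'A' for a following tense vowel; real segmental detail is ignored), not a full statement of P-Rule 2.

    import re

    def lax_vowel_deletion(form):
        """P-Rule 2 (schematic): delete a lax low vowel in (C)VC _ C V[+tense]."""
        return re.sub(r"(\w?[aiu]\w)a(\wA)", r"\1\2", form)

    print(lax_vowel_deletion("camakAna"))   # -> camkAna : transitive, deletion applies
    print(lax_vowel_deletion("camakna"))    # -> camakna : intransitive, no tense vowel follows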

C. Anusvara
3.0 The phonetically realized variant of N* in the position V-# is invariably the bilabial nasal [m].
3.1. (a) The anusvara realization rule of N* in the environment V-[+obstruent, -continuant] is the same in all languages of the IA family. The rule may be formulated in the form of P-Rule 3.

P-Rule 3.

[+nasal] → [α high, β coronal, δ distributed] / V ___ [+obstruent, -continuant, α high, β coronal, δ distributed]
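The content of P-Rule 3 can be sketched as a small feature-copying function; the feature values and nasal symbols below are my own rough assumptions for illustration, not the paper's data.

    PLACE = {                                   # assumed place features of some stops
        "p": ("-", "-", "+"),                   # (high, coronal, distributed)
        "t": ("-", "+", "+"),
        "T": ("-", "+", "-"),                   # retroflex
        "k": ("+", "-", "-"),
    }
    NASAL_FOR = {("-", "-", "+"): "m", ("-", "+", "+"): "n",
                 ("-", "+", "-"): "N", ("+", "-", "-"): "ng"}   # crude nasal symbols

    def realize_anusvara(following_stop):
        """N* copies [high], [coronal], [distributed] from the following stop."""
        return NASAL_FOR[PLACE[following_stop]]

    print(realize_anusvara("t"), realize_anusvara("k"))   # n ng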

(b) It is only in the environments V-[α vocalic, α consonantal] and V-[+obstruent, +continuant] that the anusvara realization rule varies from language to language, or even from style to style within the same language. In other words, P-Rule 3 is a panchronic rule for the IA languages, but the anusvara rule may differ when N* occurs before semivowels and liquids or before sibilants and back fricatives.
For example, N* may be realised as a fully assimilated nasal segment before semivowels and liquids (as in Sanskrit), or as a nasal segment restricted in assimilation to the features related to the point of articulation, while in respect of the major class features it has the feature complex [-vocalic, +consonantal] (as in Hindi).

(c) Most of the IA languages has anusvara realization rule which maps N* unto systematic phonetic representation as a homorganic nasal even before sibilants and back fricatives. But some of the new IA languages offer fragmentary and inconsistent evidence as N* is realised as ? irrespective of the place of articulation of the following consonant. Some of the lexical items of Oriya and Bengali defy the homorganicness of the nasal preceding before a segment which is +obstruent (Pattanayak: 1966, 44-5).
-continuant

Thus, the lexical item for 'meat' in Oriya, Assamese, Bengali and Hindi has -VN*s- in its dictionary representation, but it is realized as [-Vŋs-] O/B, [-Ṽx-] A and [-Ṽs-] H.
It is interesting to note that the vocalic variant of N* occurs in Sanskrit mainly before sibilants and h (Williams 1864: 6; Emeneau 1946; Allen 1953: 40; Chatterji 1960: 70). The Atharvaveda Pratišakhya recognizes this nasal vowel in this context. This process may be formalized as in P-Rule 4.

P-Rule 4.

V → [+nasal] / ___ N* [-syllabic, α consonantal, +continuant, -α voice]

Contrary to this, the phonetic treatises of the Taittiriya school, the Vaidikabharaṇa, the Sarvasammata Šikṣa, and the Yajuṣāṇa, hold the view that it is not the preceding vowel which is nasalized (with N* later deleted), but that N* itself is actualized as a velar consonant, i.e., P-Rule 4' is operative.

P-Rule 4'.

N* → [-syllabic, +back, +high, -continuant] / V ___ [-syllabic, α consonantal, +continuant, -α voice]

P-Rules 4 and 4' may be understood as a case of dialect variation; Oriya, in fact, has P-Rule 4' rather than the common anunasika (vowel nasalization) rule.
3.2 The nasal realization rule for anusvara (as stated in 3.0 and 3.1) operates as an MS-rule when applied within morpheme structures, and the same rule functions as a P-rule when made to operate across morpheme boundaries.

D. Anunasika

4.0 The nasal archisegment N* in the structure -VN*C- may be realized as -ṼC-.
The observation of Ferguson (1963:59) that nasal vowels, 'apart from borrowing and analogical formations, always result from loss of a PNC' (primary nasal consonant), may be modified for the IA languages, taking into consideration the fact that it is -VN*C- rather than -VNC- which gives rise to the nasalized vowels. The following table attests our view.

Old IA       -VC1C2-     -VCN-      -VN1N2-     -VN*C-
             sapta       karma      janma       danta

Middle IA    -VC2C2-     -VNN-      -VN2N2-     -VN*C-
             satta       kamma      jamma       danta

New IA1      -VC-        -VN-       -VN-        -ṼC-
             sāt         kām        jām         dā̃t

4.1 In case anusvara and anunasika are both phonemic structures in a given language, only one can be marked [+native].
(a) The Indo-Aryan languages can be divided into three sub-groups: IA1, which has anunasika as a native phonemic structure; IA2, which has anusvara as a native phonemic structure; and IA3, where anusvara and anunasika as structures are in complementary distribution.
(b) It is important to note that the IA languages display a significantly rich case of mutual borrowing due to social mobility, cultural fusion and verbal interaction within and across different dialects. The different realisations of lexical items of New IA having N* in their underlying representations can be better explained if the items are divided into three categories.

A: [+Skt., -native]      B: [+native]      C: [-Skt., -native]

For example, in the languages of the New IA1 group (H, G, M, B, etc.) the systematic phonetic representation of lexical items falling under category B does not show any case of anusvara. Words falling under categories A and C may show occurrences of anusvara, but in no case do they display an instance of anunasika. For the New IA2 group (Punjabi and Lahanda may be taken as representative of this group), anusvara is a structural feature for B and A, while instances of anunasika are strictly restricted to category C.
Thus anusvara and anunasika, as structural features, stand in different relationships with respect to the diacritic feature [native] for the New IA1 and IA2 sub-groups: for the former it is [-α anusvara, α native] and [α anunasika, α native], while for the latter it is [α anusvara, α native] and [-α anunasika, α native].

For example, compare the different actualizations of words in New IA1 (Hindi) and New IA2 (Punjabi and Lahanda) having identical phonological representations (with the diacritic feature [+native]).
/daNta/ 'tooth' → [dãt] H, [dand] P
/kaNṭa/ 'thorn' → [kãṭa] H, [kaṇḍa] P
At the same time, Hindi has the forms [bhaŋg] and [dant], and Lahanda has in use words like [khɛ̃da] 'eating', [cɛ̃da] 'rising', [raŋgla] 'coloured'. These words will have to be marked [-native] in the lexicon.
(c) The above discussion leads us to posit the nasal archisegment N* in the underlying representation of lexical items of the IA languages (irrespective of subgrouping and subclassification) whenever a nasal is immediately preceded by a vowel and followed by a consonant. This makes the underlying representation of most of the corresponding words similar. Thus the word for 'thorn', though it differs in its systematic phonetic representation in the different New IA languages, [kaṇḍa] P, [kaṇḍa] L, [kãṭa] H, [kãṭa] B, [kaṇṭa] O, [kãṭa] G, [kãṭa] M, has in its systematic phonemic representation the same -VN*C- structure.
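The two realization strategies discussed in this section can be summarised in a schematic Python sketch (the output strings are purely illustrative; vowel length, voicing changes and final-vowel loss in the actual New IA forms are ignored; '~' marks nasalization of the preceding vowel).

    HOMORGANIC = {"t": "n", "T": "n", "d": "n", "k": "ng", "p": "m"}

    def realize(underlying, group):
        """underlying uses 'N' for the archisegment, e.g. 'daNta'."""
        i = underlying.index("N")
        if group == "IA1":                       # anunasika: nasalize the vowel, drop N*
            return underlying[:i] + "~" + underlying[i + 1:]
        nasal = HOMORGANIC[underlying[i + 1]]    # anusvara: homorganic nasal (IA2)
        return underlying[:i] + nasal + underlying[i + 1:]

    print(realize("daNta", "IA1"), realize("daNta", "IA2"))   # da~ta danta (schematic)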

References

Allen, W. S. 1953. Phonetics in ancient India. London: Oxford University Press.

Cardona, G. 1965. A Gujarati reference grammar. Philadelphia : University of
Pennsylvania Press.

Chatterji, S. K. 1960. The pronunciation of Sanskrit. IL. 21. 61-82.

Emeneau, M. B. 1946. Nasal phonemes of Sanskrit. Language 22. 86-93.

Ferguson, Ch. A. 1963. Assumptions about nasals: a sample study in phonological
universals, in Universals of Language, ed. by J. H. Greenberg, Cambridge, Mass.

Kelkar, A. R.1968. Studies in Hindi and Urdu, I. Poona: Deccan College.

Pandit, P. B. 1957, Nasalisation, aspiration, and murmur in Gujarati. IL 17. 165-72.

Pattanayak, D. P. 1966. A controlled historical reconstruction of Oriya, Assamese,
Bengali and Hindi, The Hague, Mouton.

Srivastava, R. N. 1969. Review of Studies in Hindi-Urdu, by Kelkar, Language 45. 913-
927.

Varma, S. 1961. Critical studies in the phonetic observations of Indian grammarians.
(Indian Ed.) Delhi.

Williams, M. 1864. A practical grammar of the Sanskrit Language. Oxford : Clarendon
Press.


COMPUTATIONAL LINGUISTICS AND SPEECH SYNTHESIS*

N. RAMASUBRAMANIAN

(Computer Group, Tata Institute of Fundamental Research, Bombay)

1. Introduction

One of the most powerful tools of research in the 20th century is perhaps the digital computer. The computer has entered almost all fields of research. Because of its versatility and extensive usefulness for solving problems using appropriately written programmes, and because of its high speed of execution of programme steps, the computer has been used more and more by researchers. The usefulness of the digital computer in linguistic research is perhaps less known in India than abroad. Lamb [1], Kucera [2, 3] and others have provided a description of 'The Digital Computer as an Aid in Linguistics'. We have also dealt with this subject elsewhere [4].
For CDC 3600 programming, the letters A to Z, the numerals 0 to 9 and special symbols like + - = / $ * . , ( ) are available to a user, and using these symbols he writes his problem-solving steps (the programme) in a higher level language like FORTRAN (an abbreviation for Formula Translation). These instructions are converted by the computer into a machine language programme (digital representation) automatically and executed. The programme is fed to the computer through what is normally called a card reader; therefore, the programme is punched on 80-column cards.
It is enough for a linguist to know how to represent a linguistic problem in FORTRAN or any other suitable programming language rather than to learn about the working of a computer. The problem-solving procedure given in a computer-handled language is known as a programme. In general, except for the statistical studies involved in linguistic research, the problems are of a logical type. For example, let us assume that one wants to list all words containing the sequence /mb/ in English. Assuming that one searches a text or dictionary, what we normally do is to take the first word of the text/dictionary, examine it, then take the next, and so on, until we have exhausted all the words. This is a very time-consuming and tedious type of work, and one gets tired very soon. Suppose one wants to do this using a computer: one writes a programme which enables the computer to do this job very quickly, maybe in a few minutes. In the programme, one asks several questions, such as, is the first word encountered?

*This paper is based on a talk given by the author at the Summer School of Linguistics, Mysore, 1970.

If the answer is yes, is the first letter /m/? If the answer is no, is the second letter /m/? If the answer is yes, is the third letter /b/? If the answer is yes, we have found the sequence /mb/ in the first word, and hence we ask for this word to be printed out. We repeat this procedure to the end. What we note here is that we decide about the step-by-step procedure for the analysis, and at each step we take a decision as to the course of action that is to be taken next. When this step-by-step procedure is written in a programming language without errors, the computer executes the programme efficiently and fast, handling huge amounts of data. Analysis of raw field data for phonemic, phonetic and syllabic structure and distribution, testing of rules of grammars like generative grammars, dictionary preparation and alphabetization, and various types of information retrieval such as searches for prefixes, suffixes, given sequences of letters and so on are some of the areas where the computer could be used extensively. Thus one finds the digital computer a powerful tool in linguistics research. In computational linguistics one studies the various aspects of such linguistic problems, methodology and related matters. We shall see what we mean by computational linguistics in the next section.
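The step-by-step search just described would have been coded in FORTRAN on the CDC 3600; a present-day Python sketch of the same logic (the word list is invented for illustration) looks like this.

    words = ["amber", "number", "banana", "combine", "lamp"]   # stand-in word list

    def words_with_sequence(words, seq):
        """List every word containing the letter sequence seq, examining one word at a time."""
        found = []
        for word in words:
            for i in range(len(word) - len(seq) + 1):
                if word[i:i + len(seq)] == seq:    # letter-by-letter comparison
                    found.append(word)
                    break                          # report each word only once
        return found

    print(words_with_sequence(words, "mb"))        # ['amber', 'number', 'combine']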

2. Computational Linguistics

The areas of study under computational linguistics according to Kuno et al. [5], Kuno [6], and Oettinger [7] are:

(a) Mathematical Characterization of natural languages,
(b) Developments of Computer programmes useful for linguistic research and
(c) The application of linguistic techniques to computer problems.

A survey of research in the entire area of computational linguistics is beyond the
scope of this paper; such surveys may be found in Kuno [4], Oettinger [7, 8] and Bobrow et al. [9]. We give below only a sketch of each of the above topics, though the computer programmes for linguistic research will be dealt with at some length.

2.1 Mathematical Characterization of Natural Languages

Under this topic, two types of studies are possible depending upon the type of mathematics used therein: (1) statistical and (2) logical or discrete mathematics.

Frequency count of linguistic units such as phones, phonemes, syllables and words, statistics of style, authorship identification and quantitative studies of generic relationships between languages are some of the problems studied under 'statistical methods'.

Formal characterization of the phonology, syntax and semantics of natural languages come under the domain of logic or discrete mathematical studies.

2.2 Developments of Computer Programmes for Linguistics Research

A very important aspect of computational linguistics is the developments of computer programmes useful for linguistic research. While for the study of mathematical characterization of natural languages, a linguist should be a good mathematician also, development of computer programmes normally does not require knowledge of mathematics in the above sense. Hence it is worthwhile to go into some of the details of such an area, here.

The statistical study of linguistic units, a concordance (an index which lists the occurrences of each key word in a text, often with its immediate context or environment), the testing of rules of grammars such as transformational grammars as and when they are formulated, and information retrieval such as listing all words having a given prefix, words ending in a given suffix, or words with a given sequence of letters from a text/dictionary, are all best achieved by computer programmes.

The availability of graphical input-output systems at major computer centres should enable users to input and output non-standard characters, such as Devanagari or Tamil script materials, directly to the computer using appropriately written programmes. No such programme is immediately available in India. A computer-controlled plotter-generated Devanagari script programme has been developed at TIFR by Andres et al [10]. Elsewhere, Chinese text analysis programmes have been developed, and the details may be found in Walker [11].

Models of grammar used for the description of natural languages are becoming more and more complex. Hence it becomes increasingly difficult for the linguist to test the rules of grammar that he formulates. Bobrow et al [12] have described a system for phonological rule testing. A programme called TRANSFER developed at TIFR also uses rules, for the conversion of English orthography into a phonetic output suitable for speech synthesis. Rules are provided for the conversion of each letter in all environments, and on the basis of the syllabic structure of each word stress rules are automatically applied, thus assigning stress to the proper syllables, and so on.

Information retrieval from a dictionary or a concordance is of great utility. For example, we have developed at TIFR a programme called WORDHUNT, which searches for all the words with a given sequence of letters and prints them. While the programme for English orthography conversion is in progress, we often want to test for how many words a given rule would apply, and whether there are any exceptions to the rule. For this purpose one requires a list of all words containing the given sequence of letters in all contexts. For example, a list of words containing the sequence /-nger/ suggests that in general /-nger/ becomes /-njer/ in almost all words, but exceptions are also found, such as the words finger, monger, anger and so on. Finding the exceptions becomes easy once one has the exhaustive list of words available. In order to facilitate such rapid searches, there is a programme called FONOLOGY developed at TIFR. This programme arranges a dictionary on a file-by-file basis; in other words, all words containing, say, the letter 'a' are kept together, so that a word may be repeated in more than one file. This arrangement seems to be economical from the computer memory allocation point of view. The above programme also gives a frequency count of each letter with its left and right context. Another programme called FREQUENCY has been developed at TIFR, which was used by some Bombay University students for their field-data analysis. In this programme one gets not only a complete phonetic distribution and frequency count, but also the syllabic structure of each word, with each syllabic type counted and listed in increasing order, and an alphabetization of the data as a dictionary. In less than 4 minutes nearly 2500 words of each student's data were completely analysed.
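As an indication of the kind of analysis mentioned above (this is not the TIFR FONOLOGY or FREQUENCY programme, whose internals are not given here, but a small sketch of a letter-frequency count with left and right context), one might write:

    from collections import Counter

    def context_frequencies(words):
        """Count each letter together with its left and right neighbours ('#' = word boundary)."""
        counts = Counter()
        for w in words:
            padded = "#" + w + "#"
            for i in range(1, len(padded) - 1):
                counts[(padded[i - 1], padded[i], padded[i + 1])] += 1
        return counts

    freqs = context_frequencies(["camaka", "camakna"])        # invented mini-corpus
    print(freqs[("a", "m", "a")])                             # occurrences of 'm' between two a's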
A similar programme for dictionary preparation is currently being written by Andres of the Summer Institute of Linguistics, Poona. A pamphlet issued by the SIL [13] may be useful to the linguist; it describes the availability of 'computer support of linguistic field work, basic concordance programme' of the University of Oklahoma Computer Laboratory.
Speech communication with computers calls for closer co-operation between linguists and computer-based speech researchers. The problems of synthesis and recognition of speech are very complex, involving acoustic and linguistic rules. The formulation of rules for synthesis at the phonetic/phonemic level is crucial for the synthesis of a given dialect. In recognition, the role of the systematic linguistic hierarchy, the redundancy at each such level and related matters are less well known. Hence there is wide scope for research in this direction using computer-aided tools. We shall give a rough sketch of the speech synthesis programme in Section 3.

2.3 The application of linguistic techniques to computer problems

This area involves problems of automatic translation, automatic information retrieval and question-answering, the production of computer-generated abstracts, indexes and catalogues, the design of languages for computers and so on, using linguistic techniques. Automatic information retrieval includes such matters as command and control, library automation, automatic processing of medical files, automatic medical diagnosis, and filing and retrieval using natural or semi-natural language sentences and giving appropriate answers with respect to a data-base stored in the computer memory. The syntactic and semantic ambiguities of natural languages that pose major difficulties have to be tackled first, before such a system can be implemented. Thus the above area has a lot of open problems to be solved, with potential for future developments.

3. Speech Synthesis

Work in the area of computer-aided speech synthesis has been carried on at TIFR for the last four years or so. Details regarding the speech-synthesizer and its simulation programmes are described in the various publications from TIFR[4, 14, 15, 16, 17]. Here we would like to give only details required by a linguist to understand this area of research.
While writing a descriptive grammar or a transformational grammar, or any grammar, a linguist assumes that he will be able to generate well-formed sentences of the language concerned and test them through informants. He assumes that he will be able to present the sentences in spoken form, i.e., as speech. However, really speaking, no one has ever ventured to try this, and the assumption remains open. One might argue that it would be physically impossible for a linguist to sit before an informant and speak all the generated material. Then what is the solution? If some mechanical device could be developed for the linguist which could then be made to speak the generated material, the verification process would be easy. Of course, no such device now exists which can speak perfectly like man in all respects. However, speech synthesis is the first step in that direction.
The second important reason to study speech using speech synthesis is to understand more about the articulatory processes involved in the production of speech. In linguistics, articulatory phonetics is the foundation for the study of language description. Accordingly, one classifies a sound as a bilabial stop, dental stop and so on depending upon the point of articulation involved, or labels a sound as voiced or voiceless depending upon the vocal cords' mode of action (vibrating or non-vibrating), or labels a vowel as front, central or back depending upon the active part of the tongue involved inside the oral cavity, and so on. This type of definition has been accepted as a sine qua non. Yet if one wants to know whether there is any significance in such a classification on grounds other than convenience of description, there was, until speech synthesis schemes were implemented, no way of giving any affirmative reply. Speech synthesis studies by Stevens [18] and Ramasubramanian et al [4] have shown that there is a close relationship between the point and manner of articulation of a speech sound and its acoustic correlates. It is possible to show now that to a class of speech sounds one can attribute a set of acoustic properties, and so on. Thus speech synthesis is contributing to the understanding of articulatory phonetics.
The next question is about the acoustic properties of speech sounds. In acoustic phonetics one tries to learn something about the acoustic nature of sounds in isolation and in context, using spectrograms and so on. Until the Haskins Laboratories researchers established the minimal cues of speech sounds [19, 20] using a pattern-playback device, it was not possible to know anything exact about the acoustic cues of speech and to verify the same. The hand-painted spectrograms used in the pattern playback were really the first step in synthesis. Thus most of the knowledge regarding the acoustic nature of speech and its cues stems from speech synthesis studies.
One of the most important questions in phonetics concerns the nature of auditory phonetics, the auditory effects of speech. This branch is perhaps hardly studied at all by linguists. A major reason may be the non-availability of research tools and theories. However, with the advent of speech synthesis, this area has been investigated carefully. Accordingly, we now know some of the relevant acoustic parameters useful in the identification and discrimination of speech sounds. Several theories of perception have been put forward and are currently being investigated using synthetic speech. Duration, the fundamental frequency of voicing, vowel-consonant differences and so on are investigated at the auditory perceptual level, and encouraging results are obtained.
Thus, without going deep into the various aspects and usefulness of speech synthesis in linguistic research, one may feel that linguists have really got a very powerful tool, speech synthesis, at their disposal. We shall briefly try to outline the synthesis scheme implemented on a computer.
It is well known that, corresponding to the human vocal organs of speech, electronic speech synthesizers can be constructed. Though the operation of a synthesizer might differ from that of the vocal organs, so long as the end product is good speech one need not bother about the mode of operation and construction of the synthesizer itself. Hence, let us agree here that the articulatory mechanism may be translated into electronic circuit designs, i.e., that a synthesizer may be constructed.
The next question is how to activate the synthesizer, just as one would like to activate the articulators to produce speech. A native speaker acquires his language behaviour and uses a set of muscles of the vocal organs to produce one sound and another set (which may well include some of the muscles of the first set) for another sound, and so on; but all these are co-ordinated in such a way that the sounds are produced continuously and not discretely as isolated sounds. Thus speech seems to be the end product of continuous movements of the articulators, at the articulatory level at least. The instructions to activate the right muscles may come from the speech centre in the brain. Analogously, it is quite conceivable that one can activate a synthesizer by a set of control rules, the rules being formulated in such a way that the synthesizer produces continuous speech. Secondly, when we as linguists transcribe the speech of some person using phonetic transcription (or, for that matter, phonemic transcription), we do not record the quality of the speech produced by the informant, his age, sex and other characteristics. What we simply do is to note these facts in the beginning and conveniently forget about them while proceeding with the transcription. Further, we represent the observed articulatory gestures in terms of phonemes. In other words, a set of articulatory features of a speech sound is represented by a single symbol, as though only one sound is produced/heard at a time. This is further refined at the phonemic level, based on phonemic principles. Therefore, if someone says that he has a set of symbols that could be used in synthesis for a given language (speech), and that those symbols are classified into various groups having certain acoustic properties, then as linguists we have to agree with him. This is quite easy too.
Thus, in speech synthesis, one first defines all the symbols that may be used in the synthesis process (any undefined symbol will be rejected by the synthesis programme). Secondly, these symbols are classified into groups-such as vowel vs. consonant. Vowels are further classified as front, central or back. Laterals, trills and continuants are regarded as glides-i.e., vowel-like sounds-and are hence classified with vowels. Next, consonants-stops vs. fricatives-are taken up. Stops are classified further into bilabial, dental and so on. From Figure 1 (a) one can see that this classification follows the one made in linguistics. Thus any symbol gets a set of attributes: for example, /p/ is called consonant, stop, bilabial, voiceless and non-nasal.
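To make this classification concrete, a minimal sketch in Python of such a symbol table is given below. The entries, the table name and the function name are illustrative assumptions, not the actual data structures of the programme described in this paper.

# Illustrative sketch only: each defined symbol carries a set of class
# attributes, and undefined symbols are rejected by the programme.
SYMBOL_CLASSES = {
    "a": {"vowel", "back"},
    "i": {"vowel", "front"},
    "l": {"vowel", "glide"},   # laterals, trills and continuants go with vowels
    "p": {"consonant", "stop", "bilabial", "voiceless", "non-nasal"},
    "d": {"consonant", "stop", "dental", "voiced", "non-nasal"},
    "s": {"consonant", "fricative", "voiceless"},
}

def attributes(symbol):
    """Return the attribute set of a symbol; reject undefined symbols."""
    if symbol not in SYMBOL_CLASSES:
        raise ValueError("undefined symbol rejected: " + symbol)
    return SYMBOL_CLASSES[symbol]

print(attributes("p"))   # {'consonant', 'stop', 'bilabial', 'voiceless', 'non-nasal'}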
Before we go into the details of the significance of such a scheme, let us try to understand something about vowels and consonants in terms of their acoustic behaviour. Vowels, including glides, have what are known as formants. That is, while producing these speech sounds the oral cavity resonates at particular frequencies (or frequency regions); hence the outcoming speech wave contains a set of resonance frequencies-formants. Researchers have so far shown that if one takes a spectrogram of a vowel produced by a speaker, the lowermost three formants are normally sufficient to reproduce the vowel using synthesizers.

[Figure 1 (a)]


Just as a linguist omits the emotional and personal characteristics, idiosyncrasies, age and sex of the speaker from his transcriptions, in speech synthesis only the first three formants of vowels and vowel-like sounds are taken into account. At this stage no one is able to show convincingly the significance of the other formants and their functions in speech perception. While we are talking about formants, it should also be noted that each formant has a certain average amplitude (that is, the height to which a single speech wave at a given frequency may rise) and a duration for which the formant is sustained. Thus each vowel (or vowel-like sound) minimally has: (1) three frequency regions (usually the centre of the frequency region, or formant, is specified), (2) the (average) amplitude of each formant, and (3) the duration of the phonetic segment, specified or measured from good spectrograms. In addition, since only the centre frequency of each formant is normally specified, and since the formant is a region, each formant is given a width called the band-width, which is assumed to be constant throughout the synthesis process.
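As a rough illustration of this minimal specification, the following sketch records three formant centre frequencies, their average amplitudes, a duration and a constant band-width for one vowel. The names and figures are assumed for illustration only (they merely echo the kind of entry shown for I in Fig. 1 (b) below).

from dataclasses import dataclass

@dataclass
class VowelSpec:
    # illustrative record of the minimal vowel specification described above
    formants_hz: tuple          # centre frequencies of F1, F2, F3
    amplitudes: tuple           # average amplitude of each formant
    duration_cs: float          # duration of the segment, in centiseconds
    bandwidth_hz: float = 60.0  # band-width, assumed constant throughout synthesis

# assumed values for a front vowel such as I
i_vowel = VowelSpec(formants_hz=(280.0, 2000.0, 2700.0),
                    amplitudes=(10.0, 8.0, 6.0),
                    duration_cs=10.0)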

Next we come to consonants. When nasal stops and fricatives are produced, the nasal and oral cavities produce not only resonance frequencies (formants) but also anti-resonance frequencies (anti-formants). These, too, are specified in terms of their centre frequencies, amplitudes, band-widths and durations. But when stops are analyzed, one notes the following:

(1) each stop has a silence duration that differs from that of other stops;
(2) each stop has a noise frequency called the burst frequency (with, of course, an amplitude and a band-width);
(3) most importantly, each stop influences the formants of the preceding or following vowel, and the direction and extent of the formant transition of that vowel determine the identification of the stop (a minimal record along these lines is sketched below).
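A corresponding sketch for a stop might record only the first two attributes directly, since point (3)-the formant transitions-belongs to the concatenation rules rather than to the stop itself. All names and figures below are assumptions made for illustration.

from dataclasses import dataclass

@dataclass
class StopSpec:
    silence_cs: float           # (1) silence duration, differing from stop to stop
    burst_hz: float             # (2) burst (noise) frequency ...
    burst_amplitude: float      #     ... with its amplitude ...
    burst_bandwidth_hz: float   #     ... and band-width
    # (3) the transitions it induces on neighbouring vowels come from the rules

p_stop = StopSpec(silence_cs=12.0, burst_hz=2500.0,
                  burst_amplitude=5.0, burst_bandwidth_hz=300.0)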

Thus, while the identification of vowels, nasals and fricatives in isolation is possible to some extent on the basis of their formants, etc., the identification of stops is possible mainly on the basis of the transition properties of the formants of vowels. It becomes clear that the effect of the stop on the identification of vowels is also implied. Thus one notices in our synthesis scheme, for example, that a class of vowels-say, front vowels-has certain formant transitions in the context of a bilabial stop (which are different from those in the context of dental, retroflex or velar stops). Similarly, the transitions are different for central vowels and back vowels. In addition, even though a bilabial nasal stop may have a formant and an anti-formant different from those of a dental nasal stop, the transition behaviour of, say, front vowels is the same as that for the corresponding non-nasal stop. In voiced sounds there is a voicing component, the fundamental frequency F0. If we remove this, we may produce voiceless vowels (or whisper), voiceless stops and so on.

Thus we may summarise the above details as follows:

1. For each sound, supply the acoustic detail in terms of duration, formants and amplitudes (or formants and anti-formants in the case of nasals, fricatives, etc.) measured from spectrograms.

2. Specify the transition properties of vowels or vowel-like sounds in various consonantal contexts, depending upon the consonant class involved.

3. Find out from the rules how the transitions-their duration, direction and extent-for the various sounds are to be handled.

4. Supply a set of rules for (3) above, as well as the classification of symbols.

5. When any string of symbols that is to be synthesized is typed on the computer typewriter (let us assume for the present that the symbols represent a phonetic transcription), first take each symbol, find out the class to which it belongs, and then pick up its relevant acoustic parameters. Do this for all the typed symbols. Next, search the rules for the concatenation procedure for the first two symbols from left to right and apply them, generating transitions, etc. (as in rules 2, 3 and 4). Continue this process for all the symbols. A spectrogram for the speech is now obtained (generated).

These become the control rules for the synthesizer, which is then activated; the output of the synthesizer is played through a loud-speaker attached to the computer.
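The following compact sketch restates the five steps above in Python-like form. The routine and table names are assumptions made for illustration and do not correspond to the actual programmes referred to in this paper.

def synthesise(symbols, symbol_table, rules):
    # Step 1: reject undefined symbols and fetch each symbol's class and
    # acoustic parameters.
    segments = []
    for s in symbols:
        if s not in symbol_table:
            raise ValueError("undefined symbol: " + s)
        segments.append(dict(symbol_table[s]))
    # Steps 2-4: work left to right over adjacent pairs, applying every
    # concatenation rule whose class condition matches, thereby generating
    # transitions, adjusted durations, etc.
    for left, right in zip(segments, segments[1:]):
        for left_class, right_class, apply_rule in rules:
            if left_class in left["classes"] and right_class in right["classes"]:
                apply_rule(left, right)
    # Step 5: the adjusted segments amount to the generated spectrogram,
    # which would then drive the (simulated) synthesizer.
    return segments

# assumed miniature symbol table and a single duration rule, for illustration
table = {"a": {"classes": {"vowel"}, "duration_cs": 10.0},
         "p": {"classes": {"consonant", "stop", "unvoiced"}, "duration_cs": 12.0}}

def lengthen_before_unvoiced(vowel, consonant):
    vowel["duration_cs"] = vowel["duration_cs"] * 1.2   # cf. the 0.2 duration rule

print(synthesise(list("ap"), table, [("vowel", "unvoiced", lengthen_before_unvoiced)]))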

Figure 2 (p. 130) shows how a spectrogram for the word 'INDIA' is generated through speech synthesis. First, parameters for the symbols I, N, D, I and A are found and placed one after another linearly with their respective durations. In the second step (Fig. 2, bottom), the transitions, the durations of the transitions, etc., are computed through appropriate computer programmes and the spectrogram for INDIA is generated, which is then played through a loudspeaker attached to the computer, via the simulated synthesizer.

One may notice that in the synthesis programme given above, the synthesizer, the
spectrogram generation and the necessary computation involved therein are all handled by the computer through appropriately written programmes. Once the spectrograms are generated, they activate the synthesizer (simulated, of course, through a computer programme) suitably, and its output is played through the loudspeaker. The linguist supplies the rules for concatenation, the classification of sounds and their acoustic parameters as external data, which can be modified from language to language without affecting the computer programmes themselves. It may be noted that there is provision for intonation and stress, which operate normally on the fundamental frequency F0 and its related properties. As an illustrative example, I give below some rules for the synthesis of, say, Tamil [Fig. 1 (b)].

Explanation of the rules given in Figure 1 (b):

The rules are, in general, as follows:
1. Duration (increase in duration to be effected for vowels in various contexts).

2. Steady-state (formant change of glides if any in various vowel contexts).

3. Hubs (terminal frequencies for various vowel formants in the context of
consonants).

* * * * * *
(I(10.280. 10. 2000. 8. 2700. 6.) (0.280. 10. 2000. 8. 2700. 6.))
(P(12.0) (0.0))
(B(12. 120. 7.) (0.0 120. 7.))
(S(FRIC(4250. 6750. 5250.) (15.0) (0.0))
...
RULES
DURATION 2
(.(VOWEL) (UNVOICED) 0.2)
(.(VOWEL) (VOICED) 0.35)
STEADY STATE 2
(.(VOWEL) (GLIDE L) 0.1 0.2 0.1)
(.(VOWEL) (GLIDE R) 0.1 0.2 0.1)
F1HUB 2
(.(FRIC) (VOWEL) 350.)
(.(STOP) (VOWEL) 200.)
F2HUB 8
(.(LABIAL) (VOWEL) 750.)
(.(DENTAL) (VOWEL) 1800.)
...



4. Transition time (the duration for which the transition of the various formants of vowels/vowel-like sounds should take place).
5. Transition ratio (specifies the extent of transition of any formant towards the terminal frequencies specified under rule 3).

Rule 1: (specifically given in Figure 1 (b)). The duration rule specifies that when a vowel is concatenated with an unvoiced consonant, the duration of the vowel in question should be increased by a factor of 0.2, i.e., by 20 per cent; if the consonant is voiced, the increase is 0.35, i.e., 35 per cent.

Rule 2: When any vowel is followed or preceded by a glide /l/ or /r/, the first, second and third formants of the glide are to be shifted upward appropriately when the shift rule is applied.

Rule 3 (a): The first formant of any vowel should terminate at, or tend to move towards, the frequency 350 Hz (cycles per second) if a fricative precedes or follows the vowel; if the consonant is a stop, the terminal frequency should be 200 Hz.

Rule 3 (b): When a bilabial consonant is followed by a vowel, the second formant of the vowel should terminate, or tend to terminate, at a frequency of 750 Hz, and so on.

Rule 3 (c): Specifies the terminal frequency region for the third formants of different vowels. One may note that in these rules one can specify terminal frequencies for each vowel or for a class of vowels, as desired. By terminal frequency we mean the frequency region towards which, or from which, a formant may be supposed to terminate or originate in ideal cases.

Rule 4: This rule specifies that when a stop is followed by a vowel, the transition duration of the vowel formants should be 3 cs, i.e., three centiseconds; for a vowel and a fricative it is 5 centiseconds, and so on.

Rule 5: This rule specifies the transition ratio for a vowel and a stop, and so on. When a vowel is followed or preceded by a stop, and if the stop is, say, bilabial, then a transition ratio of 0.5 ensures that the vowel formants move towards the terminal frequency of 200 Hz specified in rule 3 (a) but stop exactly at the middle, without fully reaching 200 Hz. Thus when the transition ratio is one, the formant terminates exactly at the terminal frequency specified, and so on.
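Numerically, the duration and transition-ratio rules explained above can be pictured as in the short sketch below; the formula and the sample figures are my own illustration of the description, not the authors' exact computation.

def lengthened_duration(duration_cs, increase):
    # Rule 1: increase a vowel's duration by the given fraction
    # (0.2 before an unvoiced consonant, 0.35 before a voiced one).
    return duration_cs * (1.0 + increase)

def transition_endpoint(formant_hz, hub_hz, ratio):
    # Rules 3 and 5: move a formant towards the terminal (hub) frequency;
    # a ratio of 1.0 reaches the hub, a ratio of 0.5 stops exactly half-way.
    return formant_hz + ratio * (hub_hz - formant_hz)

print(lengthened_duration(10.0, 0.2))           # 12.0 cs before an unvoiced consonant
print(transition_endpoint(280.0, 200.0, 0.5))   # 240.0 Hz: half-way towards a 200 Hz hub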

From the above rules one may notice that our classification has enabled us to formulate rules for a class of sounds rather than for each and every sound. One can also specify new sets of rules, specific to each point of articulation involved, without disturbing the system itself.
We have tried above to outline speech synthesis in a compact manner; many of the details omitted here may be found in the various references cited in this paper.
Conclusion

In this paper we have tried to describe various aspects of computational linguistics and speech synthesis. It may be considered an introduction to this fast-growing field in modern linguistics. We have not gone very deep into any one of the aspects but have tried to give an overall view of the field rather than details, owing to shortage of space.

Acknowledgement

My thanks are due to Shri R. B. Thosar, my colleague for his comments and useful discussions; and to Professor R. Narasimhan for his encouragement and interest in my work.

References

1. Lamb, S. M., The Digital Computer as an Aid in Linguistics, Language 37, pp. 382-412 (1961).
2. Kucera, H., A Note on the Digital Computer in Linguistics, Language 38, pp. 279-282 (1962).
3. Kucera, H., Mechanical Phonemic Transcription and Phoneme Frequency Count of Czech, Int. Jour. of Slavic Linguistics and Poetics, VI, pp. 36-50 (1963).
4. Ramasubramanian, N., et al., On Computer Based Research Tools for Linguistics, Technical Report 81, TIFR, Bombay (1968).
5. Kuno, S., et al., Computational Linguistics in a Ph.D. Computer Science Programme, Mimeographed, Harvard University (1968).
6. Kuno, S., Computer Analysis of Natural Languages, Proc. of Symp. in Applied Mathematics, American Mathematical Society, Vol. 19, pp. 55-110 (1967).
7. Oettinger, A. G., Computational Linguistics, The American Mathematical Monthly, Vol. 27:2, Part II (1965).
8. Oettinger, A. G., Automatic Processing of Natural and Formal Languages, Proc. of IFIP Congress 65, Spartan Books Inc., Washington (1965).
9. Bobrow, D. G., et al., Survey of Automated Language Processing, Annual Review of Information Science and Technology (1970).
10. Andres, S., et al., A Note on Programming a Character Generator for the Deva Nagari Script, Technical Report 82, TIFR, Bombay (1970).
11. Walker, G. L., et al., Chinese Mathematical Text Analysis, IEEE Transactions on Engineering Writing and Speech (1968).
12. Bobrow, D. G., et al., A Phonological Rule Tester, Communications of the ACM (1968).
13. Language Processing and Analysis by Computer, Summer Institute of Linguistics, Poona (India Branch) (1968).
14. Rao, P. V. S., et al., Speech: A Software Tool for Speech Synthesis Experiments, Tech. Report 38, TIFR, Bombay (1968).
15. Ramasubramanian, N., et al., Synthesis by Rule of Some Retroflex Speech Sounds, Tech. Report 67, TIFR, Bombay (1969) (also to appear in Language and Speech, London).
16. Thosar, R. B., A Method of Analysing Formant Transitions, Tech. Report 57, TIFR, Bombay (1968).
17. Mamtri, M. V., et al., Preliminary Report on Speech Research, Tech. Report 21, TIFR, Bombay (1967).
18. Stevens, K. N., Acoustic Correlates of Place of Articulation for Stop and Fricative Consonants, QPR 89, M.I.T. (1969).
19. Delattre, P., et al., Acoustic Cues in Speech, First Report, Haskins Laboratory, New York (1968; original French version in Phonetica 2).
20. Delattre, P., et al., Acoustic Loci and Transitional Cues for Consonants, JASA 27 (1955).


ON THE PREPARATION OF PROGRAMMES FOR THE COLLEGE
LEARNER OF A FOREIGN LANGUAGE: A FEW PROBLEMS

M. L. TICKOO

(Central Institute of English, Hyderabad)

Discussions on the role of Programmed Instruction (PI) in foreign language learning may raise two types of questions: first, those concerning the theoretical problems; second, those that are predominantly practical1.

By theoretical problems I mean those that relate to the psychological foundations on which programming rests; by practical problems, those that arise in the actual production and use of programmed materials. The first type includes problems that are both general and particular-those that belong to programming in general and the ones that have special relevance to PI as applied to foreign language teaching.

(A) Programming is an offshoot of the psychology of learning which, as an academic discipline, has of late received a great deal of scholarly attention and effort. For all that, however, it has not won the confidence of knowledgeable scholars or practising teachers. There is a lot of dissatisfaction with its performance and disbelief in its results. One extreme expression of this dissatisfaction is found, for instance, in I. A. Richards's thinking on psychology, especially educational psychology:

'Educational psychology is not what we want. That, too, is still a toddling infant science and our ordinary tact and skill and common sense are far in advance of the utmost reach of its present purview'2.

Richards also does not consider educational psychology as something relevant even to the teacher's calling, for in his view:

'they could learn whatever they [need] from it that will be useful to them as teachers much more easily without it, and their time is badly needed for other studies and for reflection'3.

1We are engaged in the preparation of programmed materials for the college learner of English from one part of the country. The preparation of these materials is presenting some problems. This paper is a rewrite of a lecture I gave about a few of these problems to the participants of the Summer School of Linguistics at the Central Institute of Indian Languages, Mysore in May 1970.

2 Richards, I. A.: Interpretation in Teaching, Routledge and Kegan Paul, 1949, p.
9.

3 Richards, I. A.: op. cit., p. 11.

Richards's view is certainly one-sided. It is not shared by most others who are equally interested in the subject and who have also made honest efforts to find out the relevance of learning theories to formal classroom teaching. It is also perhaps a little outdated: it belonged to the 1940s.

A most reasonable assessment of the theories of learning, including those aspects which are related to P.I., comes from an analysis of some of the main contributions made by them to human teaching and learning. This analysis leads to two main conclusions. First, that because most theories, particularly those on which PI is based, are derived from animal researches, they do not, in many cases, apply to human situations. Animals make good laboratory subjects; human beings do not. Animal behaviour is much more easily predictable; human behaviour is almost unpredictable4. Animals are not tied down to any institutions, whereas human beings in their civilized state depend a great deal on socio-cultural institutions of various kinds. Animals have no history and are not in the habit of handing down traditions and conventions. They are bothered by neither their past nor their future. Generalisations based on the study of animals need not therefore be entirely true of human beings.

A second conclusion is perhaps a little more instructive. It also does a little more justice to learning psychology: its most successful findings are essentially confirmations of ordinary experience; they bring together and sanctify ordinary knowledge. Put in different words, we say that a main task of the psychology of learning has been to systematise, and very often confirm, the hunches of practitioners engaged in the business of teaching and learning.

An example which is of direct relevance to the psychology that lies behind PI may illustrate this point. It refers to 'conditioning' as a psychological principle.

In seventeenth-century Spain there was a playwright-Lope de Vega-who wrote a play called El Capellan de la Virgen (The Chaplain of the Virgin). In this play we are introduced to a character who was very often deprived of his food by some mischievous cats that belonged to the monastery. The man tried but could not chase the cats away. But in the end he found a way of coping with these animals. He put them in a sack and took them out under an arch on a pitch-black night. There he repeatedly coughed and associated his coughing with what he calls 'whaling the daylight out of the cats'. This repeated association of coughing and punishing the cats resulted in a very happy situation for the man. Never after did the cats take away his food because, as soon as any of them approached him, he would cough and make its flesh creep with the thought of the punishment that would follow.

4 A good example of the unpredictability of human behaviour is the recent British elections. The pollsters unanimously predicted a Labour victory; the results belied the predictions almost completely.

What the character in Lope de Vega's play did is, in its most elementary form, what is known today as classical 'conditioning' in the psychological literature on the subject. The large number of experiments performed by Pavlov and his followers have, of course, added a great deal to both the concept of conditioning and its details. They have also made conditioning a successful instrument in the hands of numerous workers who are engaged in problems of behaviour, animal as well as human. But the basic idea in Lope de Vega and Pavlov is almost the same, and psychology has only systematized and confirmed what existed, even though in a crude form, before the birth of experimental psychology.

This generalization is true of many other principles and 'laws' in the psychology of learning. Thorndike's laws of learning-of 'readiness', 'exercise' and 'effect', as also the modifications and the 'sub-laws' that he introduced later, all mainly confirm and systematise the experiences of able teachers and understanding parents. So do numerous other findings that belong to the most successful efforts made in the psychology of learning.

A number of questions can be raised on the rationale of PI as well as on the type of psychology that supports it. Programmers, however, have an apt answer to many of these: they say that, for all its psychological failings, PI works. PI produces results. And this has been especially true in its application to subjects such as school mathematics and elementary sciences, wherein the introduction of programming has opened up new possibilities in terms of both speed and efficiency. Nor can we exclude foreign language teaching from the group of subjects which profit from the use of programming. The programmer, therefore, while losing all the theoretical battles, manages to win the war.

(B) The more urgent and immediate problems belong to the actual design of materials. These practical problems are also of two kinds. The first kind includes decisions that are essential preliminaries to the production of programmes. The second contains those which are experienced on the writer's desk. Let us briefly refer to these in that order.

(B. i) Our first problem is this. Programmed materials are usually auto-instructional. Such materials have distinctive advantages for individual learners of varying proficiency. But in most cases they also produce fatigue and boredom much more quickly than instruction given with a teacher's assistance. In several subjects it has been found that self-instructional lessons of more than 20 minutes' duration begin to operate on 'diminishing returns'. And despite Prof. Skinner's assertion that 'as a mere reinforcing mechanism, the teacher is out of date', we continue to need this outdated creature to sustain the learner's interest in his work. More specifically, practising programmers need to decide not so much whether the teacher is still useful, but exactly how best, when, and for how long the teacher's intimate contact is necessary to motivate the learner to go on learning with maximum efficiency.

In a situation like ours this decision about the roles of materials as against men is of much more than academic interest. Each full 'unit' of the remedial programmes for the intermediate learner that we are now engaged in producing takes an hour or more of continuous learning effort. Unaided by human intercourse or encouragement, the learner, we find, does not give the entire unit his undivided attention5. Our answer does not lie in the teacher unaided by the programme because, as I have argued elsewhere6, the teacher of tomorrow in this field will most likely be neither adequately qualified nor suitably motivated to undertake the type of instruction provided in these programmes. But even if this were not true, we have to contend with the programmer's argument, largely supported by experience, that the teacher who succeeds in creating the 'living dialogue' which sustains interest and produces further learning is no ordinary teacher. Such a teacher is rare in general and rarer still in foreign language teaching.

What is necessary therefore is not to assume too much of qualified human assistance but, at the same time, to utilise the presence of the ordinary classroom teacher where he can contribute most.

(B. ii) A second related problem is that of having to define the areas which lend themselves to programming without creating problems of administration and organization which are not easily solved in the ordinary circumstances of foreign language teaching in this country. Some areas appear to be much more programmable than others; some present barriers which, at least now, appear difficult to remove even in happier classroom conditions.

The exact specification of these areas is a task which requires the support of long-term research and experience. But a tentative estimate of what is not conveniently possible is not difficult to make at this stage. Aspects of language whose acquisition requires original thinking or creative effort may for long defy our ordinary techniques of programming, and those which depend wholly or largely on human interaction and teacher-student dialogue are equally difficult for any kind of atomised presentation. Also, younger learners, whose learning efforts are greatly facilitated or thwarted by the relations they are able to form with their mates or masters, are less likely to accept programmed manuals or machines as substitutes for men than more mature learners. Rewards and recognition offered by human agencies are also not readily available in working with programmed materials. Nor, in their cases, can machine-produced reinforcement equal the satisfaction given by praise or competition.

For the practising programmer these two problems-(i) having to limit the scope of the programme to avoid boredom on the part of the learners, and (ii) having to delimit the linguistic areas which lend themselves to productive programming-are among the most urgent and difficult.

5 A series of college-based experimental try-outs is being planned to find out the extent to which these and similar fears are justified and to help us overcome the difficulties caused by the absence of the teacher.
6 See Tickoo, M. L. in the Bulletin of the CIE, No. 7, 1968-69, pp. 50-52.

A second set of practical problems belongs to a different plane in a programmer's work. To look at them, let us take it that he has selected a language area and is now engaged in the design of programmed materials, in our case a Pupil's textbook for a specific purpose. What are some of the main problems he is likely to face and what do they tell us about the present possibilities of programming in the specified areas?

To answer this we must briefly go over some of the preliminaries of programming.

A programme is something that is designed to provide the skills and knowledge that are necessary to lead specified individuals to clearly prescribed, testable competence and performance in a field or subject through well-defined and tested 'steps'. It can also be defined as a sort of disciplined approach to instruction which begins in behavioural analysis, is characterised by empirically determined behavioural sequences and results in explicitly described terminal performance. The main elements of an instructional programme, as seen in both definitions, are (i) specified individuals, (ii) clearly prescribed and fully-defined end-performance, (iii) 'steps' incorporating elements of skills or knowledge fully tested for their efficacy in producing the known end-products, and (iv) empirical testing at each stage.

These four essentials are by no means the only important ingredients of an actual tried-out and validated programme. There are others, including the learner's involvement at each step, the provision of adequate reinforcement and the use of different techniques and devices to heighten the instructional possibilities of the programme. We shall, however, concern ourselves with only these four in our brief review of the programme designers' problems.

(i) The first prerequisite of a good programme is the exact specification of the programme users. This includes details about the learners' previous background, their ability to do things that are related to the programme, their motivation, and their aims in learning it. It should also mention the details of the learners' present competence and performance in different skills or difficult aspects of language acquisition. In short, the programmer should be able to say almost everything about the learners' past or present that has any relevance to their ability to profit from the materials and methods of the programme. In addition he ought, as far as possible, to be able to state not only what they know or seek to know but also to estimate their ability to do things at various stages in their use of the programme.

One formidable problem in language programming comes from this need to define our 'customer'. Evidently it is easy to specify his 'surface structure'-his age, the years or the exact hours he has been learning the language, his mother-tongue or any other tongues that he may have taken lessons in or found necessary for use, before he starts learning the language being taught and so on. These details, though extremely useful, in practice add up to only a small part of what is required in an exact specification of the learner. The really difficult problems lie in our attempt to measure his language acquirements and competencies, especially at the relatively advanced stages of learning.

To begin with, we do not yet know what exactly is involved in the acquisition of a new language. Here we are left with opinion rather than with proved or ascertained facts. For some authorities language acquisition consists chiefly of skills which together make up linguistic competence; for others skills and habits form only a part of what is called 'knowing a language'. Some even doubt if language acquisition can be explained at all in terms of skills and habits.

Here, for example, is one of Noam Chomsky's recent pronouncements on the subject: 'A good deal of the foreign language instruction that's going on now, is based on the assumption that language is a habit structure, that language is a system of skills and ought to be taught by a drill and by the formation of stimulus-response associations. I think the evidence is very convincing that that view of language structure is entirely erroneous, and that it's a very bad way-certainly an unprincipled way -to teach language. . . Our understanding of the nature of language seems to me to show quite convincingly that language is not a habit structure, but that it has a kind of a creative property and is based on abstract formal principles and operations of a complex kind'*.

Secondly, if we nevertheless continue to take the 'old-fashioned' view that language acquisition, except at a level of competence which very nearly approximates that of the native speaker, does mainly rest in the four major skills of reading, writing, listening and speech, we have to seek the means by which we can measure each of these with a degree of certainty and find out the learner's proficiency in one or more of them. In the present state of our knowledge or ignorance this too appears to be beyond us. This is partly because the learner's performance, at least for some areas of language, does not, even in the best of circumstances, spell out his competence. But it is more so because our scales of measurement leave much room for improvement. What they show in these areas is not enough to justify the programmer's objective statement about what is known or not known to a group or class of learners.

A precise yet comprehensive statement of the entrance behaviour is therefore not altogether within our grasp at this stage. Specification of the learner appears to be a more difficult task in foreign language programming than it is in subjects like mathematics or elementary- and intermediate-level natural sciences.

*'Noam Chomsky's view of language' by Alasdair MacIntyre in The Listener, Vol. 79, No. 2044, p. 690.

(ii) As one would expect, our second essential-the definition of the 'terminal behaviour'-presents the same difficulties as exist in the case of the specification of the entrance behaviour. But in this case there is an additional factor to complicate the situation. It is that a clear specification of the end-result must be made in terms of our statement of what the learner comes with at the beginning of the programme. So if, as is bound to happen in many cases in programme design, the statement of the entry behaviour suffers from want of fullness or scientific rigour, the statement of the terminal behaviour can hardly be any better.

(iii) We next come to 'stepping'. In the literature of PI a lot of argument centres round stepping and step size. Some doubt if small steps are suited to all types of learning and if atomizing is the best way of presenting the material. In language studies in particular, meaning, which constitutes the essence of communication, is without doubt the most neglected aspect in theoretical discussion. Introduction of 'stepping' may add to its neglect in practical pedagogy.

Others question the wisdom of attempts to weed out 'steps' which do not work with the vast majority of learners. Such attempts at over-simplification, though they are justified in terms of pass percentages and statistics, do also, in most cases, take away the greater half of the challenge that is vital to good teaching and effective learning. Good teaching must include exercises which require genuine thought and action and which, when solved, bring joy to the learner. 'Steps' can also be criticised on a different count. If learning or remembering involves 'an effort after meaning', teaching which reduces the need for effort should, to that extent, also reduce the possibilities of effective or efficient learning.

In spite of all these criticisms, however, the notion of 'stepping' does appeal to the foreign language teacher of today. It does so because it is interesting and yet not altogether new. Those of us who have had the experience of designing or using 'structural' courses for learners of English as a Foreign Language know all too well the main purpose in grading at the earlier levels: it is to 'stage' the material in such a way that the learner is neither discouraged and overawed by the introduction of too many items, nor starved for want of anything new.

Good grading mainly depends on the maintenance of a balance between too much and too little. But, especially at the earlier stages, it also rests in organizing the material in such a way that (a) everything that is new is closely linked with everything that precedes it, and (b) the new and the established together not only assimilate a part of the linguistic description but also imply the description in full. This is also characteristic of 'stepping' at its best.

(iv) But there is a difference of no mean proportions. Unlike structurally graded materials of one type or another, programmed materials have in them the built-in provision for testing at each stage. Every 'step' and every 'unit' of a programme becomes complete only after the learner has undergone a test which tells the programmer what has or has not been learnt and, simultaneously, provides the learner the much-needed reinforcement for further effort. Tests, which often differ from one type of programme to another, are at once the most essential and, in some ways, the most rewarding feature of PI.

For all their promise and performance, however, the two main types of tests-the Skinnerian 'constructed-response' test and the Crowderian 'multiple-choice test', as also their variants or possible combinations, leave a serious doubt unresolved. The doubt is this: does success at one or all of these 'tests' guarantee learning? Or, in other words, is there a visible and definable relationship between test responses and actual achievement?
The main reason for our doubt is this. Tests which form part of any language programme do not, in the vast majority of cases, require the learner to use the language item or items being taught; they only call for recognition or recall of what has been learnt in a 'frame'. Here is an example.

At the end of a programme 'frame' on the use of the definite article in English for the intermediate-level learner, the tests given to assess the results may take one of two forms. We may give the learner a blank-filling exercise, in this case without necessarily specifying the position of the blanks. The learner may be told that from the passage given X definite articles are missing and be required to supply these definite articles at their appropriate places. Alternatively, if the learner has also learnt the use of the indefinite article (a, an) and the 'zero' article, he may be asked to fill the blanks using the articles or to make sentences by 'matching' or combining two or more sentence parts. In the latter case he will most probably choose from one of four possible slot fillers-'the', 'a', 'an', and 'zero'.

Both these types, as also their many variants, if well prepared, should tell us with a measurable degree of accuracy whether and to what extent the instruction given through the programme unit or frame has gone home to the learner. They should tell us a great deal about the quality of the programme and its immediate gains and show the suitability or otherwise of the programme for the learners under instruction.

Add to all this the fact that in the literature on objective testing there is no experimental evidence for the growing belief that these tests fail to tell us anything about the learners' competence in the habitual use of the language for everyday needs, and we ought to feel greatly satisfied with their performance. A combination of several reasons makes us doubt, however, whether such tests in their present state of development (even though they are admirably suited to the narrowly defined purpose of a programme 'frame') can or do serve as efficient instruments towards the broader objective in foreign language teaching, which is to produce the ability to make use of the language in its written or spoken form. Three of these reasons require specific mention in the context of the programmes under consideration.

The first reason is this. At the intermediate stage (also known as the PUC or Junior College stage) one main concern of the teacher or programmer is to repair the damage done during the period of schooling that has preceded. Every programme, therefore, whether it is avowedly remedial or not, has to give a lot of conscious attention to correcting errors, and most tests, whether deliberately or otherwise, must concentrate on 'examining for error' much more than on 'examining for achievement'. The best part of each test is therefore a test of well-known problems in a particular area (viz. pronunciation, vocabulary, or grammar and usage) of language use rather than of the use of language in effective communication. And with our growing, though as yet imperfect, understanding of the errors caused by L1 and L2 contrasts, a tester is most likely to construct tests with these errors in view. Tests of this type naturally leave out a whole gamut of actual or potential problems in successful language use.

Secondly, in programming it is usual to concentrate on one skill at a time. This has its obvious advantages. Among other things, it facilitates 'frame'-construction and also makes an accurate measurement of results a relatively easy undertaking. But there is the other side of the coin, which appears to have stayed hidden hitherto. It is that language skills do not usually operate by themselves. Each one forms part of a complex whole which, in its composition, is linked with other major skills as also with some extra-linguistic factors. Good tests in languages must take cognizance of what George Perren rightly terms 'a vast, ill-defined, and shifting range of behaviours'. The art of testing in its present state of development does not provide for this.

Thirdly and lastly, there is the most important question of the 'transfer' from what is tested to what is needed. This is, in part, based on what was said earlier about 'examining for error' as against 'examining for achievement'. It also calls into question the testers' belief that the selected parts represent the totality of foreign language behaviour.

Much more than these two, however, it is related to the fundamental question referred to earlier. Our ignorance of what constitutes language acquisition, coupled with our failure to give due weightage to many sizeable elements of what we tentatively take as representative skills of meaningful communication, has produced a situation in which, with the best of objective tests, we are still left with misgivings about the extent to which what is measured approximates to what is essential for language use. Does good performance in, for example, a test in oral comprehension which utilizes both segmental and supra-segmental discrimination tests mean that the learner will use 'language as speech' effectively for ordinary communication? Or does it leave out a lot that can tilt the balance in the opposite direction? If it be the former, why then does one come across instances of foreign speakers whose speech jars on native and non-native ears alike and, on occasion, fails to put across the intended meaning, in spite of their near-perfect use of phonemes or phonemic contrasts? But if it be the latter, what must our tests do better or do differently than they do now so as to make the 'transfer' from the tested to the needed a much clearer possibility?

These then are some of the vexing issues that face us in our attempt to design result-producing, self-instructional materials for the intermediate- and relatively advanced-level learner of a foreign language. There are others too. Some, like our helplessness on finding that to ask certain things of the student of the foreign language is like asking for the moon, have to be accepted as being part of the pattern. Some, like the absence of usable and comprehensive contrastive linguistic studies, especially in the areas of grammar and usage, are perhaps a temporary phenomenon. Many of the problems raised in the discussion above will also, we hope, solve themselves with experience or through experiment, though some, at least at present, appear much more intransigent.

For the committed programmer even the less viable aspects of programming may appear to present no problems. He already sees, as I heard one enthusiast say in his defence of programming earlier in the day, 'beaming faces full of purpose and full of promise'. For those of us, however, who are not so keen to jump on the bandwagon, there is all the reason in the world to keep an open mind about the possibilities of each new innovation in educational technology but, at the same time, to guard against anything which, in building a facile faith in some 'miraculous solutions', makes the language teacher's job even more difficult than it is today.

In what I have said this afternoon, I have only raised a few of our questions and misgivings in attempting to prepare programmes for a specific need. I have said nothing about either the possibilities of programming or the ways in which the less knotty of the problems may approach their resolution. For the former we can go to the works of many able exponents of PI; for the latter we must await the results of classroom-based try-outs and experiments. At the Central Institute of English we are now battling with the preliminaries of some such inside-the-classroom studies and experiments. Before long we should have at least a few tentative answers.