A Mind for Language

Feb 21, 2020

Funny story to help remember Greek 3rd declension endings

I wrote this a while back as a funny mnemonic to help remember Greek 3rd (or consonant) declension endings. Enjoy.


Once there was a man named Sydney (ς) who was very fond of Women and felt threatened by other men, so he had his poor dog Neutered so there was No Chance (-) of the dog getting in his way. Sydney and his dog were very fond of devices with Apple OS (-ος, -ος), such that his dog had his own iPhone. They would sit together on their iPhones (-ι) watching ET (-ι). One day Sydney had an AHA (-α) moment and realized his life was going Nowhere (-).

So he went to Express (-ες) and bought some new clothes to make himself very Attractive (-α). Now that he Owned (-ων) new clothes, he thought it was a Sin (-σιν) to go out in the old ones, so he gave them to his neighbor who was an Attractive (-α) Astrologer (-ας).

Jan 17, 2020

Translators' work -- a poem

I've been reading Adorning the Dark by Andrew Peterson and it's got me thinking about poetry. I started reading some and now I've written some. I wrote it in Greek first and then wrote the English version. My profoundest apologies to Homer and the Greek greats for abusing their language :-).

ἔργον τῶν ἑρμηνευόντων

σπέρματα μετατίθημι μεγάλα
ἀπ' ἀγροῦ εἰς ἄγραν.
φυτὰ αὐξήσουσιν.
ταχέως;
βραδέως;
οὐδέποτε ἴσως βλέψω;
βλέψουσιν ἴσως τὰ τέκνα σου;

Translators' work

I move seeds, big ones,
from one field to another.
Plants will grow.
Quickly?
Slowly?
Maybe I'll never see them?
Your kids, maybe they'll see?

Dec 20, 2019

Learning Greek, an interview with Seumas Macdonald

On 18/19 Nov 2019 (depending on your time zone) Dr. Seumas Macdonald and I had a very enjoyable conversation about Ancient and Koine Greek and how he got to his current level in the language.

Seumas is a Greek teacher and researcher who runs The Patrologist.com. He is working on a number of projects to create Greek teaching/learning materials that reflect what linguistics has taught us about how humans learn languages. He also teaches and tutors Greek online.

This post is a summary of the interview. It starts with Seumas's Greek journey and ends with some recommendations for learners. You can listen to our conversation here.

In the course of our conversation he repeated that getting to his current level took a lot longer than it needed to because of the methods used, and that with proper Comprehensible Input (CI) based methods it shouldn't take a learner nearly as long.

Seumas's journey into Greek

It starts

Like many of us, Seumas started in traditional grammar-translation (GT) classes. He began with Mounce's Basics of Biblical Greek, which he worked through on his own before entering a theological degree. In that program he was required to take four years of Greek. The first year was a reprise of what was in Mounce's book; the second year moved on to Wallace's Greek Grammar Beyond the Basics; and the last two years were exegesis classes on the Greek text of the New Testament (GNT).

During this time he used electronic flashcards to learn all GNT vocab occurring 3-4 times or more. He also began studying Latin on the side and got additional Greek input from taking outside classes on works written in the Attic dialect.

Journey to CI

Seumas's journey to Comprehensible Input based teaching and learning started with Lingua Latina Per Se Illustrata (LLPSI). Through this book he realized that there are more effective ways to learn languages than the GT method. He read posts in online Latin teaching forums and began considering how to apply these methods to Greek. There was less communication about CI methods and materials at that time, but he grabbed some of Randall Buth's materials from The Biblical Language Center.

While working in Mongolia, he had the experience of learning a modern language. His reading of the research in language acquisition helped him understand what to focus on.

So I did almost no homework in my classes. I didn't do any translation exercises. I didn't do any flashcards for Mongolian. I just tried to do what I believed would be useful, which would be getting exposed to Mongolian as much as possible.

These experiences helped shape his understanding of the effectiveness of CI and how to apply it. He also took some Greek classes online from sources such as Michael Halcomb's Conversational Koine Institute.

Last few years

Starting in 2015, Seumas began tutoring Greek and Latin and doing more conversational stuff online. He also attended some SALVI Rusticationes (Living Latin weeks/weekends). These helped him realize that he could hold and lead a conversation in Greek and keep instruction in Greek without the need for English. He's also been involved in some online Greek chats and various reading groups where they read and discuss a text in Greek.

Summary

Seumas has taken a variety of classes, some of which used the GT approach and some of which were more CI based. He notes that the experience of reading a lot in these classes helped grow his skills and that the communicative classes were more enjoyable and useful. Over time he's moved toward more CI based activities such as online chats, reading groups, and events held in the language.

How does CI shape what he does?

I asked Seumas how CI shapes what he does,

... I enjoy grammar, I enjoy linguistics, I enjoy understanding how language works, but knowing that and ... being pretty convinced that acquisition happens when there's input and that input is understandable kind of drags me back and says "What are you doing with your time and what are you doing with your students?" There's a place for grammar and a place for explanation, but the bulk of what I should be doing, both as an ongoing learner, but also as a teacher, is getting input for myself and creating input for others. That's what's going to drive acquisition.

Pushing farther

I also asked Seumas what he does to further his own Greek skills. He said that it's driven largely by the "vicissitudes" of life. He teaches whatever his students need to learn. One day he may be working through a classical text with his students or he may be prepping them for a traditional GT style final exam for a GNT class.

He also has regular online video chats, which allow him to speak at a higher level than he normally does with his students, and he tries to read things that interest him. This includes easy stuff such as textbooks and Greek readers.

What he's excited about currently

Seumas is currently working on a project called Lingua Graeca Per Se Illustrata.

It's essentially ... an open ended writing project to create as much text that flattens out the curve for anyone reading Koine or Attic to read interesting stories that introduce grammar and vocabulary gradually and give meaningful, comprehensible repetition.

He views this as a "shared universe of Greek" that people can read and contribute to. He is intentionally structuring the materials and thinking through what readers need to be exposed to, and in what order, in terms of both vocab and grammar.

He's also working on a podcast, but notes that it's currently on "an unscheduled, long hiatus".

Recommendations

Seumas offers the following advice:

I wouldn't tell people to do things the way I did them because it took too long to get where I am and I think people can get there much faster if they're a bit smarter about what they spend their time on... I think the best use of time is to be reading, reading a lot, reading things that are relatively easy. If it's an intensive type of reading experience where you have to look up a lot of words, you're not getting as much exposure as you would by reading a lot of easy stuff.

He also recommends listening to stuff, but notes that sadly there isn't as much Greek audio material available as there is Latin at the moment.

How to move into more difficult texts

My main question for Seumas was about the transition from New Testament Greek to Classical or other Koine materials. From my perspective one of the main problems is the quantity of new vocabulary. He said texts outside the GNT tend to be written in a higher, literary register, so "I think you try to flatten the gradient as much as possible" between what you're already comfortable with and the target texts. "...in one sense the vocab problem never really goes away," and he went on to explain that Homeric Greek is a challenge for him because the vocab is different, so he needs to look up a lot of it when reading.

Flatten the gradient

To flatten this gradient Seumas recommends choosing texts "that will make that gap as small as possible."

So from the GNT one might proceed to the Apostolic Fathers. The Didache could be a good place to start because of its similarity to the GNT, but First and Second Clement might be more difficult.

Seumas said,

I tell people to cheat as much as they need to; that is, it's always fine to be looking things up or using whatever resources you can to make things as intelligible as quickly as possible.

Greek readers can help here (check out Steadman's readers).

Read at multiple levels

One thing I tell people to do is try to be doing different things at different kinds of levels.

Make sure to read things that are relatively easy as well as things that are more stretching. This provides more input, which is what we really need to learn Greek. Perhaps 70% of your study time is spent on easier materials (a Greek reader, or reading and re-reading the New Testament) and a smaller portion on a more challenging Classical text or one of the Church Fathers.

Learn to ask questions in Greek

Developing the ability to ask ... and answer simple questions so that you can talk about the text in the language. Who is this person? What are they doing? Where are they?

This helps you stay in the language when working with a text. (Check this out for examples on how to do this.)

Reading out loud and writing

Reading out loud early on can be helpful as it involves multiple senses. Recording yourself can be helpful too, so that you have something to listen to when you can't sit down and read. Or you can find someone else and swap recordings with them "if you can't stand the sound of your own voice."

Seumas would also encourage people to start writing earlier than they think they should by writing a few sentences or summarizing a text in a simpler way.

Resources

Here are some of the resources mentioned in this interview or in the post.

Dec 10, 2019

Fun with vocab-tools: vocab info for a book

More fun with James Tauber's vocabulary-tools. I'm trying to read the whole NT in Greek, and Titus is next. I started reading it, but there was a lot of unfamiliar vocab, or at least vocab I didn't feel certain of. Vocabulary-tools to the rescue again. Sure, I could buy a reader's Greek New Testament, but where's the fun in that? Also, using vocabulary-tools lets me customize which words are added to the list.

from collections import Counter
from gnt_data import get_tokens, get_tokens_by_chunk, TokenType, ChunkType
from abott_glosser import Glosser
from ref_tools import get_book
import sys

# Get all lemmas in GNT
gnt_lemmas = Counter(get_tokens(TokenType.lemma))

# Get lemmas for the book of Titus
NEW_CHAPTER = Counter(get_tokens(TokenType.lemma, ChunkType.book, get_book("TIT", 60)))

# get GNT frequency, rather than frequency in this book
def getNTFreq(nt, tgt):
    out = {}
    for t in tgt.items():
        lemma = t[0]
        if lemma in nt:
            out[lemma] = nt[lemma]
    return out

# map each lemma in Titus to its GNT-wide frequency
ACT_NT_FREQ = getNTFreq(gnt_lemmas, NEW_CHAPTER)

# Filter for lemmas that occur fewer than LIM times in the GNT as a whole
LIM = 10
freq = lambda x: int(x[1]) < LIM
TGT = sorted(list(filter(freq,ACT_NT_FREQ.items())), key=lambda x: x[0])

# setup glosser
glosser = Glosser("custom-glosses.tab")

# output results
for l in TGT:
    print(f"{l[0]}\t{l[1]}\t{glosser.get(l[0])}")

By running py get_chapter.py > titus_vocab.txt I now have a vocab list that I can print out and stick in my GNT for easy access. In theory I could also keep track of this list and filter these words out when I move on to the next book, or filter out words I've only seen a certain number of times. Also, by tweaking the print line to print(f"{l[0]}\t{glosser.get(l[0])}"), the file could be imported into Anki and boom! Instant flashcards.
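Here's a minimal sketch of that first idea, picking up from the script above (it reuses TGT and glosser). The seen_lemmas.txt file and its one-lemma-per-line format are my own invention:

from pathlib import Path

SEEN_FILE = Path("seen_lemmas.txt")

def load_seen():
    # lemmas that already appeared on earlier lists, one per line
    if SEEN_FILE.exists():
        return set(SEEN_FILE.read_text(encoding="UTF-8").split("\n"))
    return set()

seen = load_seen()
unseen = [l for l in TGT if l[0] not in seen]

# two-column output (lemma, gloss) is ready for Anki import
for l in unseen:
    print(f"{l[0]}\t{glosser.get(l[0])}")

# remember these lemmas so the next book's list can skip them
with open(SEEN_FILE, 'a', encoding="UTF-8") as f:
    for l in unseen:
        print(l[0], file=f)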

Nov 22, 2019

Fun with vocab-tools: comparing chapter vocab and glossing it

More fun with James Tauber's vocabulary-tools. So what if you're reading through an NT book chapter by chapter and you wonder what new vocab you're likely to encounter in the next chapter that wasn't in the previous one? Vocabulary-tools can help you figure that out.

Vocabulary-tools doesn't include a glossing tool (as far as I know), but here is a simple one based on a gloss list from the Abbott-Smith NT Greek lexicon (which you can get here).

from greek_normalisation.utils import nfc

class Glosser():
    def __init__(self):
        self.data = dict()
        # gloss file format: lemma<TAB>gloss, one entry per line
        with open("gloss-dict.tab", 'r', encoding="UTF-8") as f:
            for line in f:
                parts = line.split("\t", maxsplit=1)
                if len(parts) > 1:
                    self.data[nfc(parts[0])] = parts[1]

    def get(self, l):
        # normalise before lookup so Unicode composition matches the dict keys
        normed = nfc(l)
        if normed in self.data:
            return self.data[normed]
        else:
            print(f"{normed} not found in Abbott-Smith")
            return ''

Now we can combine that with the following code and run it by typing py analyze_chapter.py <new-cpt-num>. It will print out a list of words that occur fewer than LIM times in the NT, along with the number of occurrences and the gloss from Abbott-Smith (if found). I'm currently reading Acts; if you want a different book, you'll need to replace BOOK_ABBV['ACT'] with the book code for the book you want to read. You can figure out this code from the vocabulary-tools module.

from collections import Counter
from gnt_data import get_tokens, get_tokens_by_chunk, TokenType, ChunkType
from abott_glosser import Glosser
import sys

new_cpt = int(sys.argv[1])
BOOK_ABBV = {"GLA": "69", "1JN" : "83", "ACT": "65"}

# Get all lemmas in GNT
gnt_lemmas = Counter(get_tokens(TokenType.lemma))

# format the chapter markers (chapter numbers are zero-padded to two digits)
last_cpt = f"{new_cpt - 1:02d}"

# Get lemmas for the current and previous chapters
LAST_CHAPTER = Counter(get_tokens(TokenType.lemma, ChunkType.chapter, BOOK_ABBV['ACT'] + last_cpt))
NEW_CHAPTER = Counter(get_tokens(TokenType.lemma, ChunkType.chapter, BOOK_ABBV['ACT'] + f"{new_cpt:02d}"))

# get GNT frequency, rather than frequency in the current chapter
def getNTFreq(nt, tgt):
    out = {}
    for t in tgt.items():
        lemma = t[0]
        if lemma in nt:
            out[lemma] = nt[lemma]
    return out

# subtract the last chapter's vocab, then map what's left to GNT frequencies
ACT_NT_FREQ = getNTFreq(gnt_lemmas, NEW_CHAPTER - LAST_CHAPTER)

# Filter for lemmas that occur fewer than LIM times in the GNT as a whole
LIM = 10
freq = lambda x: int(x[1]) < LIM
TGT = sorted(list(filter(freq,ACT_NT_FREQ.items())), key=lambda x: x[0])

print(len(TGT))

# setup glosser
glosser = Glosser()

# output results
for l in TGT:
    print(f"{l[0]}\t{l[1]}\t{glosser.get(l[0])}")

Running py analyze_chapter.py 11 for Acts 11 produced the following output.

21
Κλαύδιος        3       Claudius | C. Lysias

Κυρηναῖος       6       of Cyrene | a Cyrenæan

Κύπριος 3       of Cyprus | Cyprian

Κύπρος  5       Cyprus

Στέφανος        7       Stephen

Ταρσός  3       Tarsus

Φοινίκη not found in Abbott-Smith
Φοινίκη 3
Χριστιανός      3       a Christian

διασπείρω       3       to scatter abroad, disperse

εὐπορέομαι not found in Abbott-Smith
εὐπορέομαι      1
καθεξῆς 5       successively | in order | afterwards

προσμένω        7       to wait longer | continue | remain still | to remain with | to remain attached to | cleave unto | abide in

πρώτως  1       first

σημαίνω 6       to give a sign, signify, indicate

ἀναζητέω        3       to look for | seek carefully

ἀνασπάω 2       to draw up

Ἅγαβος not found in Abbott-Smith
Ἅγαβος  2
ἐκτίθημι        4       to set out, expose | to set forth, expound

Ἑλληνιστής      3       a Hellenist |  Grecian Jew

ἡσυχάζω 5       to be still | to rest from labour | to live quietly | to be silent

ἴσος    8       equal | the same

Nov 19, 2019

More fun with JTauber's vocab tools: Finding verses and pericopes with shared vocab

Vocabulary acquisition requires repeated exposure: we need to encounter a given word again and again before our brains acquire it. Reading texts that cover similar topics is a great way to do this. Since the topic is similar, the likelihood that vocabulary will repeat between the texts is higher.

For those of us interested in New Testament Greek and acquiring vocabulary, reading the GNT would be a good way to do this. Read the whole thing and you will certainly have acquired a good deal of vocab. But sometimes, biblical texts don't address the same topic with enough repetition for us to naturally get the repeated exposure we need to acquire a word within a short period of time.

What if we could read passages that have a high degree of shared vocab? That should provide the repetition. But how do we find these passages?

Enter the dragon... I mean, enter James Tauber's vocabulary tools for the GNT.

The code

The following code loops through each verse in the GNT and gets the set of all lemmas found there. It then loops through every other verse in the GNT and figures out which lemmas are not common to the two verses. If the number of lemmas that aren't shared is below a given limit (in this case 5), it saves the pair to be output.

from gnt_data import get_tokens, get_tokens_by_chunk, TokenType, ChunkType
from greekutils.verse_ref import bcv_to_verse_ref

reffer = lambda x: bcv_to_verse_ref(x, start=61)  # book numbering starts at 61 (Matthew)

# lemma lists for every verse in the GNT
gnt_verses = get_tokens_by_chunk(TokenType.lemma, ChunkType.verse)

commons = dict()
LIM = 5
for verse, lemma in gnt_verses.items():
    print(reffer(verse))  # progress indicator
    verse_set = set(lemma)
    for v, l in gnt_verses.items():
        if v == verse:
            continue
        vset = set(l)
        # lemmas that appear in one verse but not the other
        u = verse_set.union(vset)
        intr = verse_set.intersection(vset)
        not_common = u - intr
        if len(not_common) < LIM:
            if verse in commons:
                commons[verse].append(v)
            else:
                commons[verse] = [v]
with open("common_list_verses.txt", 'w') as g:
    for k,v in commons.items():
        print(reffer(k), file=g)
        for i in v:
            print("\t" + reffer(i), file=g)
print("DONE!")

Here's a snippet of the results:

Matt 4:14
    Matt 2:17
    Matt 12:17
    Matt 21:4

Now let's compare them (Greek text taken from [1]):

Matt 4:14 is ἵνα πληρωθῇ τὸ ῥηθὲν διὰ Ἠσαΐου τοῦ προφήτου λέγοντος·

  • Matt 2:17 – τότε ἐπληρώθη τὸ ῥηθὲν ⸀διὰ Ἰερεμίου τοῦ προφήτου λέγοντος
  • Matt 12:17 – ⸀ἵνα πληρωθῇ τὸ ῥηθὲν διὰ Ἠσαΐου τοῦ προφήτου λέγοντος·
  • Matt 21:4 – Τοῦτο ⸀δὲ γέγονεν ἵνα πληρωθῇ τὸ ῥηθὲν διὰ τοῦ προφήτου λέγοντος·

What about larger units of text?

Ok, but who wants to skip around reading random verses? By making a few tweaks to the code above, we can compare pericopes.

gnt_verses = get_tokens_by_chunk(TokenType.lemma, ChunkType.pericope)
...

LIM = 10
...
with open("common_list_pericope.txt", 'w') as g:
    for k,v in commons.items():
        print(k, file=g)
        for i in v:
            print("\t" + i, file=g)

This returns the following passages. I had to write some extra code to convert the pericope codes into normal passage references, so you'll want this file and this file if you want to run this part yourself.

Mark 10:13 - Mark 10:16
    Luke 18:15 - Luke 18:17
Luke 18:15 - Luke 18:17
    Mark 10:13 - Mark 10:16
Eph 1:1 - Eph 1:2
    Col 1:1 - Col 1:2
Col 1:1 - Col 1:2
    Eph 1:1 - Eph 1:2

By changing LIM to 15 we get the following list.

Mark 10:13 - Mark 10:16
    Luke 18:15 - Luke 18:17
Luke 18:15 - Luke 18:17
    Mark 10:13 - Mark 10:16
Eph 1:1 - Eph 1:2
    Phil 1:1 - Phil 1:2
    Col 1:1 - Col 1:2
    2 Thess 1:1 - 2 Thess 1:2
    2 Tim 1:1 - 2 Tim 1:2
Phil 1:1 - Phil 1:2
    Eph 1:1 - Eph 1:2
    Col 1:1 - Col 1:2
    2 Thess 1:1 - 2 Thess 1:2
Col 1:1 - Col 1:2
    Eph 1:1 - Eph 1:2
    Phil 1:1 - Phil 1:2
    2 Thess 1:1 - 2 Thess 1:2
    2 Tim 1:1 - 2 Tim 1:2
2 Thess 1:1 - 2 Thess 1:2
    Eph 1:1 - Eph 1:2
    Phil 1:1 - Phil 1:2
    Col 1:1 - Col 1:2
    1 Tim 1:1 - 1 Tim 1:2
    2 Tim 1:1 - 2 Tim 1:2
    Phlm 1:1 - Phlm 1:3
1 Tim 1:1 - 1 Tim 1:2
    2 Thess 1:1 - 2 Thess 1:2
    2 Tim 1:1 - 2 Tim 1:2
2 Tim 1:1 - 2 Tim 1:2
    Eph 1:1 - Eph 1:2
    Col 1:1 - Col 1:2
    2 Thess 1:1 - 2 Thess 1:2
    1 Tim 1:1 - 1 Tim 1:2
Phlm 1:1 - Phlm 1:3
    2 Thess 1:1 - 2 Thess 1:2

κ.τ.λ.
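For the curious, the conversion code I mentioned could look roughly like this sketch. The pericopes.tsv mapping file (pericope code, starting verse, ending verse, tab-separated) is my own invention; the actual files linked above may differ.

# Hypothetical sketch: turn pericope codes into readable passage references.
# Assumes a mapping file pericopes.tsv: code<TAB>start_bcv<TAB>end_bcv.
PERICOPE_REFS = {}
with open("pericopes.tsv", 'r', encoding="UTF-8") as f:
    for line in f:
        code, start, end = line.strip().split("\t")
        PERICOPE_REFS[code] = f"{reffer(start)} - {reffer(end)}"

# then, when writing the results:
#     print(PERICOPE_REFS[k], file=g)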

ChunkType could also be changed to chapter if you'd like to compare chapters.
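For example, swapping the chunk type in the first line gives chapter-level comparisons:

gnt_verses = get_tokens_by_chunk(TokenType.lemma, ChunkType.chapter)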

All of the above uses lemmas. If you are interested in forms, then simply replacing TokenType.lemma with TokenType.form in this line will do the trick.

gnt_verses = get_tokens_by_chunk(TokenType.form, ChunkType.pericope)

I doubt this will change your life as a student or a teacher, but it is certainly interesting to know which verses or passages share vocabulary. It could help us develop better reading assignments for students or point us toward passages that would be interesting reading for growing our own vocabulary.


[1]: Michael W. Holmes, The Greek New Testament: SBL Edition (Lexham Press; Society of Biblical Literature, 2011–2013)

Nov 14, 2019

Fun with James Tauber's vocabulary tools

James Tauber has written a set of vocabulary tools for the Greek New Testament (GNT).

I wanted to read Acts 10 and thought I'd see which words there occur fewer than 10 times in the GNT overall. The code in Listing 1 will get that list and print each word and its total GNT count to a text file.

Listing 1

from collections import Counter
from gnt_data import get_tokens, get_tokens_by_chunk, TokenType, ChunkType
import pprint

# vocabulary-tools book codes
BOOK_ABBV = {"GLA": "69", "1JN": "83", "ACT": "65"}

# all lemmas in the GNT, with counts
gnt_lemmas = Counter(get_tokens(TokenType.lemma))

# lemmas in Acts 10, with counts
ACT_10_lemmas = Counter(get_tokens(TokenType.lemma, ChunkType.chapter, BOOK_ABBV['ACT'] + "10"))

# map each lemma in the chapter to its GNT-wide frequency
def getNTFreq(nt, tgt):
    out = {}
    for t in tgt.items():
        lemma = t[0]
        if lemma in nt:
            out[lemma] = nt[lemma]
    return out

ACT_NT_FREQ = getNTFreq(gnt_lemmas, ACT_10_lemmas)

# keep lemmas occurring fewer than 10 times in the whole GNT
freq = lambda x: int(x[1]) < 10
TGT = sorted(list(filter(freq, ACT_NT_FREQ.items())), key=lambda x: x[0])

pprint.pprint(TGT)
print(len(TGT))

with open("act_10.txt", 'w', encoding="UTF-8") as f:
    for l in TGT:
        print(f"{l[0]}\t\t{l[1]}", file=f)
print("Done!")

I then wanted the glosses for these words.

I have a list of glosses extracted from the Abbott-Smith NT Greek lexicon (the list is available here). So I wrote some code to read the output from the previous script, grab the glosses, and add them to the file.

Listing 2

import sys

GLOSSES = {}

# load the Abbott-Smith gloss list: lemma<TAB>gloss
with open('gloss-dict.tab', 'r', encoding="UTF-8") as f:
    for l in f:
        parts = l.strip().split("\t", maxsplit=2)
        if len(parts) > 1:
            GLOSSES[parts[0]] = parts[1]

# first argument: vocab list from Listing 1; second argument: output file
ARGS = sys.argv[1:]

with open(ARGS[0], 'r', encoding="UTF-8") as f:
    with open(ARGS[1], 'w', encoding="UTF-8") as g:
        for l in f:
            word = l.strip().split("\t", maxsplit=1)
            # words without a gloss in Abbott-Smith are skipped
            if word[0] in GLOSSES:
                rest = "\t".join(word[1:])
                print(f"{word[0]}\t{GLOSSES[word[0]]}\t{rest}", file=g)

I printed the resulting file out and I'm off reading. It's nice to have a cheat sheet of less common vocab for the chapter.

Nov 14, 2019

Pronunciation video in Koine Greek

Here's my first shot at making a video about the pronunciation I use for Koine Greek. I'm sure I butcher accents and stumble over plenty of words, but here it is anyway. It uses some vocab that I found reading Dionysius Thrax's Τέχνη Γραμματική (The Art of Grammar).

ἀποτελεῖ 'it produces' was interesting to me as it is used to talk about the sound a letter makes.

Click here to watch

Click here to listen to or download the audio

Jun 12, 2019

Learning Syriac - project index

Syriac is a Semitic language in the same family as Aramaic. It also has one of the earliest translations of the New Testament and is a Christian literary language as well.

It's related to the Aramaic that Jesus and the Apostles would have grown up speaking...

... and it's not widely known ...

... and it doesn't have nearly the resources available that Greek, Hebrew, or Latin do.

My hope is that this series of posts will help throw some light on how to learn it by describing my experience as I'm doing just that.

Consider this an experiment and a test to see if some of the techniques that I have found effective with Koine Greek work for another ancient language.

Principles

These are some of the principles that will guide how I go about it.

  1. Input over grammar
  2. Cloze grammar cards
  3. Spaced, repeated reading

Input over grammar

In other words, time spent reading Syriac will be more effective than time spent reading about Syriac.

As long as what I'm reading qualifies as comprehensible input (i.e. at least 95% understandable).

The problem with classical languages is finding material that qualifies as comprehensible.

Since I'm a beginner, my plan is to start by adding the sentences from my textbook to an Anki flashcard deck. I'll make sure that I understand the translation of these sentences before I add them.

Because the Syriac writing system is quite different from what I'm used to and has a lot of silent letters. I'm going to include a rough phonetic transcription on the back of each card along with the translation.

Another trick I learned is to bold new vocabulary that you want to learn. This will be more useful after I'm more comfortable reading and don't need a full translation on each card. At that point, I'll stop adding full translations and just note the meaning of the bolded words.
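To give an idea of what I mean, here's a minimal sketch of how such cards could be generated. The sentences.tsv input file (sentence, transcription, translation, tab-separated) and the *stars* convention for marking new vocab are my own assumptions; the output is a two-column file that Anki can import.

# Hypothetical sketch: build an Anki-importable TSV from textbook sentences.
# Input "sentences.tsv": sentence<TAB>transcription<TAB>translation.

import re

def bold(text):
    # *word* -> <b>word</b> so Anki renders new vocab in bold
    return re.sub(r"\*(.+?)\*", r"<b>\1</b>", text)

with open("sentences.tsv", 'r', encoding="UTF-8") as f, \
     open("anki_cards.tsv", 'w', encoding="UTF-8") as g:
    for line in f:
        parts = line.rstrip("\n").split("\t")
        if len(parts) < 3:
            continue
        sentence, transcription, translation = parts[:3]
        front = bold(sentence)
        back = f"{transcription}<br>{translation}"
        print(f"{front}\t{back}", file=g)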

Cloze grammar cards

For more details see this post.

The basic idea is to use cloze deletions to become more familiar with the paradigms.

These work really well for helping me get comfortable with how a paradigm works.

This is in tension with the input-over-grammar principle above, but I find such familiarity helpful.

Again, the goal is not memorization for its own sake, but familiarity.
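As an illustration (using a Greek paradigm, since I haven't started the Syriac cards yet), here's a sketch of how one such note could be generated. Each form gets its own cloze deletion, so Anki's cloze note type makes one card per form; the file name is my own choice.

# Hypothetical sketch: turn a paradigm into a single Anki cloze note.
paradigm = [("1sg", "λύω"), ("2sg", "λύεις"), ("3sg", "λύει"),
            ("1pl", "λύομεν"), ("2pl", "λύετε"), ("3pl", "λύουσιν")]

# wrap each form in its own cloze deletion: "1sg: {{c1::λύω}}" etc.
cells = [f"{slot}: {{{{c{i}::{form}}}}}"
         for i, (slot, form) in enumerate(paradigm, start=1)]

# one note, one form per line; import with Anki's cloze note type
note = "<br>".join(cells)
with open("cloze_cards.txt", 'w', encoding="UTF-8") as g:
    print(note, file=g)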

Spaced, repeated reading

See this post for more details.

In short, the idea is to use spaced repetition to schedule rereading of material that qualifies as comprehensible input.

This is accomplished by using spaced repetition software.

For me this is Anki.

At the beginning, the cards will simply be the sentences from my textbooks. Once I'm more comfortable, the cards will probably be links, book + page numbers, or other references to the material that I want to read.

What's next

As I write other posts about learning Syriac, I'll add links to them here.

...

Jun 10, 2019

Spaced, repeated reading

Spaced repetition software is perhaps the most effective way to study with flashcards. The idea is that we review new material right before we forget it, then wait longer before reviewing it again.

Lather, rinse, repeat.

Each review strengthens the memory.

Comprehensible input

Language, though, is learned through interaction with material that is at least ~95% comprehensible, rather than as a collection of facts to memorize. This can happen via reading, listening, conversation, etc.

We need a lot of input for the brain to acquire language.

It's simply how the brain works.

For Greek, this means lots of reading (it is a classical language after all), but reading at or near one's current level.

Repeated exposure to comprehensible input allows our brains to acquire the words and grammar in that input naturally.

But we need repeated exposure.

Could we perhaps combine spaced repetition with reading of comprehensible input to get the needed repetition?

Yep.

How to combine them

Here's how I've done this.

I have two decks: one for shorter sentences that contain words I specifically want to learn, and a second that just has the "address" of what I want to read.

In the first deck, I make a card with the sentence I want to review and bold the words I want to learn. Then I put the meanings of the new words on the back of the card.

For the second deck, I make a card that tells me what to read. I don't put anything else on these cards. For example, I have some cards that say I Clem 17 (1st Clement, chapter 17) or Rom 6 or I Clem 20:1-3.

The second deck takes much more time to review and I need access to the books I'm reading. The first deck I can study on the go or whenever I have a few minutes.

For the second deck, I'd recommend creating cards with smaller chunks of text to read. So instead of a card that says I John 1, you might make a card for I John 1:1-4 and then other cards for the rest of the chapter.

This means that you don't need as much time to reread the material and are more likely to keep at it.
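Here's a quick sketch of how chunking a chapter into these smaller "address" cards might look; the book name, verse count, and chunk size are just examples.

# Hypothetical sketch: break a chapter into small reading "address" cards.
def reading_cards(book, chapter, n_verses, chunk=4):
    # one card per small block of verses, e.g. "I John 1:1-4"
    cards = []
    for start in range(1, n_verses + 1, chunk):
        end = min(start + chunk - 1, n_verses)
        cards.append(f"{book} {chapter}:{start}-{end}")
    return cards

# I John 1 has 10 verses
for card in reading_cards("I John", 1, 10):
    print(card)

# prints:
# I John 1:1-4
# I John 1:5-8
# I John 1:9-10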

Of course, you don't need a spaced repetition app to do this. The idea is simply to read and reread the same texts. This lets you learn grammar and vocab in a natural way.

The trouble is finding texts to read that are level appropriate, but that is a can of worms for another post ... or Ph.D. dissertation ;-).
