A Mind for Language

Dec 20, 2019

Learning Greek, an interview with Seumas Macdonald

On 18/19 Nov 2019 (depending on your time zone) Dr. Seumas Macdonald and I had a very enjoyable conversation about Ancient and Koine Greek and how he got to his current level in the language.

Seumas is a Greek teacher and researcher who runs The Patrologist.com. He is working on a number of projects to create Greek teaching/learning materials that reflect what linguistics has taught us about how humans learn languages. He also teaches and tutors Greek online.

This post is a summary of the interview. It starts with Seumas's Greek journey and ends with some recommendations for learners. You can listen to our conversation here.

In the course of our conversation he noted repeatedly that it took him much longer than it needed to because of the methods he used, and that with proper Comprehensible Input (CI) based methods it shouldn't take a learner nearly as long.

Seumas's journey into Greek

It starts

Like many of us, Seumas started in traditional grammar-translation (GT) classes. He began with Mounce's Basics of Biblical Greek, which he worked through on his own before entering a theological degree. In that program he was required to take four years of Greek. The first year was a reprise of what was in Mounce's book; the second year moved on to Wallace's Greek Grammar Beyond the Basics; and the last two years were exegesis classes on the Greek text of the New Testament (GNT).

During this time he used electronic flashcards to learn all GNT vocab occurring 3-4 times or more. He also began studying Latin on the side and got additional Greek input from taking outside classes on works written in the Attic dialect.

Journey to CI

Seumas's journey to Comprehensible Input based teaching and learning started with Lingua Latina Per Se Illustrata (LLPSI). Through this book he realized that there are more effective ways to learn a language than the GT method. He read posts in online Latin teaching forums and began considering how to apply the approach to Greek. There was less communication about CI methods and materials at that time, but he grabbed some of Randall Buth's materials from the Biblical Language Center.

While working in Mongolia, he had the experience of learning a modern language. His reading of the research in language acquisition helped him understand what to focus on.

So I did almost no homework in my classes. I didn't do any translation exercises. I didn't do any flashcards for Mongolian. I just tried to do what I believed would be useful, which would be getting exposed to Mongolian as much as possible.

These experiences helped shape his understanding of the effectiveness of CI and how to apply it. He also took some Greek classes online from sources such as Michael Halcomb's Conversational Koine Institute.

Last few years

Starting in 2015, Seumas began tutoring Greek and Latin and doing more conversational work online. He also attended some SALVI Rusticationes (Living Latin weeks/weekends). These helped him realize that he could hold and lead a conversation in Greek and keep instruction in Greek without the need for English. He's also been involved in online Greek chats and various reading groups where they read and discuss a text in Greek.

Summary

Seumas has taken a variety of classes, some of which used the GT approach and some of which were more CI based. He notes that reading a lot in these classes helped grow his skills and that the communicative classes were more enjoyable and useful. Over time he has moved toward more CI based activities such as online chats, reading groups, and events held in the language.

How does CI shape what he does?

I asked Seumas how CI shapes what he does,

... I enjoy grammar, I enjoy linguistics, I enjoy understanding how language works, but knowing that and ... being pretty convinced that acquisition happens when there's input and that input is understandable kind of drags me back and says "What are you doing with your time and what are you doing with your students?" There's a place for grammar and a place for explanation, but the bulk of what I should be doing, both as an ongoing learner, but also as a teacher, is getting input for myself and creating input for others. That's what's going to drive acquisition.

Pushing farther

I also asked Seumas what he does to further his own Greek skills. He said that it's driven largely by the "vicissitudes" of life. He teaches whatever his students need to learn. One day he may be working through a classical text with his students or he may be prepping them for a traditional GT style final exam for a GNT class.

He also has regular online video chats, which allow him to speak at a higher level than he normally does with his students. He also tries to read things that interest him, including easy stuff such as textbooks and Greek readers.

What he's excited about currently

Seumas is currently working on a project called Lingua Graeca Per Se Illustrata.

It's essentially ... an open ended writing project to create as much text that flattens out the curve for anyone reading Koine or Attic to read interesting stories that introduce grammar and vocabulary gradually and give meaningful, comprehensible repetition.

He views this as a "shared universe of Greek" that people can read and contribute to. He is intentionally structuring the materials, thinking through what readers need to be exposed to, and in what order, in terms of both vocab and grammar.

He's also working on a podcast, but notes that it's currently on "an unscheduled, long hiatus".

Recommendations

Seumas offers the following advice:

I wouldn't tell people to do things the way I did them because it took too long to get where I am and I think people can get there much faster if they're a bit smarter about what they spend their time on... I think the best use of time is to be reading, reading a lot, reading things that are relatively easy. If it's an intensive type of reading experience where you have to look up a lot of words, you're not getting as much exposure as you would by reading a lot of easy stuff.

He also recommends listening to stuff, but notes that sadly there aren't as many Greek audio materials as there are Latin ones at the moment.

How to move into more difficult texts

My main question for Seumas was about the transition from New Testament Greek to Classical or other Koine materials. From my perspective one of the main problems is the quantity of new vocabulary. He said texts outside the GNT tend to be written in a higher, literary register, so "I think you try to flatten the gradient as much as possible" between what you're already comfortable with and the target texts. "...in one sense the vocab problem never really goes away," and he goes on to explain that Homeric Greek is a challenge for him because the vocab is different, so he needs to look up a lot of it when reading.

Flatten the gradient

To flatten this gradient Seumas recommends choosing texts "that will make that gap as small as possible."

So from the GNT one might proceed to the Apostolic Fathers. The Didache could be a good place to start because of its similarity to the GNT, but First and Second Clement might be more difficult.

Seumas said,

I tell people to cheat as much as they need to; that is, it's always fine to be looking things up or using whatever resources you can to make things intelligible as quickly as possible.

Greek readers can help here (check out Steadman's readers).

Read at multiple levels

One thing I tell people to do is try to be doing different things at different kinds of levels.

Make sure to read things that are relatively easy as well as things that are more stretching. This provides more input, which is what we really need to learn Greek. Perhaps 70% of your study time is spent on easier materials (a Greek reader or reading and re-reading the New Testament) and a smaller portion on a more challenging Classical text or one of the Church Fathers.

Learn to ask questions in Greek

Developing the ability to ask ... and answer simple questions so that you can talk about the text in the language. Who is this person? What are they doing? Where are they?

This helps you stay in the language when working with a text. (Check this out for examples on how to do this.)

Reading out loud and writing

Reading out loud early on can be helpful as it involves multiple senses. Recording yourself can also be helpful so that you have something to listen to when you can't sit down and read. Or you can find someone else and swap recordings with them "if you can't stand the sound of your own voice."

Seumas would also encourage people to start writing earlier than they think they should by writing a few sentences or summarizing a text in a simpler way.

Resources

Here are some of the resources mentioned in this interview or in the post.

Dec 10, 2019

Fun with vocab-tools: vocab info for a book

More fun with James Tauber's vocabulary-tools. I'm trying to read the whole NT in Greek, and Titus is next. I started reading it, but there was a lot of unfamiliar vocab, or at least vocab I didn't feel certain of. Vocabulary-tools to the rescue again. Sure, I could buy a reader's Greek New Testament, but where's the fun in that? Also, using vocabulary-tools lets me customize which words are added to the list.

from collections import Counter
from gnt_data import get_tokens, get_tokens_by_chunk, TokenType, ChunkType
from abott_glosser import Glosser
from ref_tools import get_book
import sys

# Get all lemmas in GNT
gnt_lemmas = Counter(get_tokens(TokenType.lemma))

# Get lemmas for the book of Titus
NEW_CHAPTER = Counter(get_tokens(TokenType.lemma, ChunkType.book, get_book("TIT", 60)))

# get GNT freq, rather than freq in the current book
def getNTFreq(nt, tgt):
    out = {}
    for t in tgt.items():
        lemma = t[0]
        if lemma in nt:
            out[lemma] = nt[lemma]
    return out

# map each lemma in the book to its frequency in the whole GNT
ACT_NT_FREQ = getNTFreq(gnt_lemmas, NEW_CHAPTER)

# Filter lemmas based on those that occur less than LIM in the GNT as a whole
LIM = 10
freq = lambda x: int(x[1]) < LIM
TGT = sorted(list(filter(freq,ACT_NT_FREQ.items())), key=lambda x: x[0])

# setup glosser
glosser = Glosser("custom-glosses.tab")

# output results
for l in TGT:
    print(f"{l[0]}\t{l[1]}\t{glosser.get(l[0])}")

By running py get_chapter.py > titus_vocab.txt I now have a vocab list. Now I can print the list and stick it in my GNT for easy access. In theory I could also keep track of this list and filter these words out when I move on to the next book. Or filter out those that I have only seen a certain number of times. Also, by tweaking the print line to print(f"{l[0]}\t{glosser.get(l[0])}"), the file could be imported into Anki and boom! Instant flashcards.
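The "filter out words I've already seen" idea can be sketched with plain Counters. The lemma counts below are made up for illustration (they stand in for get_tokens results), so no vocabulary-tools calls are needed:

```python
from collections import Counter

# Made-up lemma counts standing in for get_tokens() output for two books.
previous_book = Counter({"λογος": 3, "θεος": 5, "χαρις": 2})
next_book = Counter({"θεος": 4, "χαρις": 1, "πιστις": 2, "ελπις": 1})

# Keep only lemmas in the next book that weren't on the earlier list.
seen = set(previous_book)
new_vocab = {lemma: n for lemma, n in next_book.items() if lemma not in seen}

for lemma, n in sorted(new_vocab.items()):
    print(f"{lemma}\t{n}")
```

Swapping the toy Counters for real get_tokens calls (and persisting `seen` between runs) would give a running "already studied" list.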

Nov 22, 2019

Fun with vocab-tools: comparing chapter vocab and glossing it

More fun with James Tauber's vocabulary-tools. So what if you're reading through an NT book chapter by chapter and you wonder what new vocab you're likely to encounter in the next chapter that wasn't in the previous one? Vocabulary-tools can help you figure that out.

Vocabulary-tools doesn't include a glossing tool (as far as I know), but here is a simple one based on a gloss list from the Abbott-Smith NT Greek lexicon (which you can get here).

from greek_normalisation.utils import nfc

class Glosser():
    def __init__(self):
        self.data = dict()
        with open("gloss-dict.tab", 'r', encoding="UTF-8") as f:
            for line in f:
                parts = line.split("\t", maxsplit=1)
                if len(parts) > 1:
                    self.data[nfc(parts[0])] = parts[1]

    def get(self, l):
        normed = nfc(l)
        if normed in self.data:
            return self.data[normed]
        else:
            print(f"{normed} not found in Abott Smith")
            return ''

Now we can combine that with the following code and run it by typing py analyze_chapter.py <new-cpt-num>. It will print out a list of words that occur less than LIM times in the NT, the number of occurrences, and the gloss from Abbott-Smith (if found). I'm currently reading Acts; if you want a different book, you'll need to replace BOOK_ABBV['ACT'] with the code for the book you want to read. You can figure out this code from the vocabulary-tools module.

from collections import Counter
from gnt_data import get_tokens, get_tokens_by_chunk, TokenType, ChunkType
from abott_glosser import Glosser
import sys

new_cpt = int(sys.argv[1])
BOOK_ABBV = {"GLA": "69", "1JN" : "83", "ACT": "65"}

# Get all lemmas in GNT
gnt_lemmas = Counter(get_tokens(TokenType.lemma))

# format last chapter marker (zero-padded below 10)
last_cpt = "0" + str(new_cpt -1) if new_cpt -1 < 10 else str(new_cpt -1)

# Get lemmas for current and previous chapters
LAST_CHAPTER =  Counter(get_tokens(TokenType.lemma, ChunkType.chapter, BOOK_ABBV['ACT']+ last_cpt))
NEW_CHAPTER = Counter(get_tokens(TokenType.lemma, ChunkType.chapter, BOOK_ABBV['ACT']+ str(new_cpt)))

# get GNT freq, rather than freq in the current chapter
def getNTFreq(nt, tgt):
    out = {}
    for t in tgt.items():
        lemma = t[0]
        if lemma in nt:
            out[lemma] = nt[lemma]
    return out

# subtract vocab seen in the last chapter from the list
ACT_NT_FREQ = getNTFreq(gnt_lemmas, NEW_CHAPTER - LAST_CHAPTER)

# Filter lemmas based on those that occur less than LIM in the GNT as a whole
LIM = 10
freq = lambda x: int(x[1]) < LIM
TGT = sorted(list(filter(freq,ACT_NT_FREQ.items())), key=lambda x: x[0])

print(len(TGT))

# setup glosser
glosser = Glosser()

# output results
for l in TGT:
    print(f"{l[0]}\t{l[1]}\t{glosser.get(l[0])}")

Running py analyze_chapter.py 11 on Acts 11 produced the following output.

21
Κλαύδιος        3       Claudius | C. Lysias

Κυρηναῖος       6       of Cyrene | a Cyrenæan

Κύπριος 3       of Cyprus | Cyprian

Κύπρος  5       Cyprus

Στέφανος        7       Stephen

Ταρσός  3       Tarsus

Φοινίκη not found in Abott Smith
Φοινίκη 3
Χριστιανός      3       a Christian

διασπείρω       3       to scatter abroad, disperse

εὐπορέομαι not found in Abott Smith
εὐπορέομαι      1
καθεξῆς 5       successively | in order | afterwards

προσμένω        7       to wait longer | continue | remain still | to remain with | to remain attached to | cleave unto | abide in

πρώτως  1       first

σημαίνω 6       to give a sign, signify, indicate

ἀναζητέω        3       to look for | seek carefully

ἀνασπάω 2       to draw up

Ἅγαβος not found in Abott Smith
Ἅγαβος  2
ἐκτίθημι        4       to set out, expose | to set forth, expound

Ἑλληνιστής      3       a Hellenist |  Grecian Jew

ἡσυχάζω 5       to be still | to rest from labour | to live quietly | to be silent

ἴσος    8       equal | the same

Nov 19, 2019

More fun with JTauber's vocab tools: Finding verses and pericopes with shared vocab

Vocabulary acquisition requires repeated exposure to a word in order for our brains to acquire that word. In other words, we need to encounter a given word repeatedly to acquire it. Reading texts that cover similar topics is a great way to do this. Since the topic is similar, the likelihood that there will be repeated vocabulary between the texts is higher.

For those of us interested in New Testament Greek and acquiring vocabulary, reading the GNT would be a good way to do this. Read the whole thing and you will certainly have acquired a good deal of vocab. But sometimes, biblical texts don't address the same topic with enough repetition for us to naturally get the repeated exposure we need to acquire a word within a short period of time.

What if we could read passages that have a high degree of shared vocab? That should provide the repetition. But how do we find these passages?

Enter the dragon... I mean, enter James Tauber's vocabulary tools for the GNT.

The code

The following code loops through each verse in the GNT and gets the set of all lemmas found there. It then loops through every other verse in the GNT and figures out which lemmas are not common to the two verses. If the number of lemmas that aren't shared is below a given limit (in this case 5), it saves the pair to be output.

from gnt_data import get_tokens, get_tokens_by_chunk, TokenType, ChunkType
from greekutils.verse_ref import bcv_to_verse_ref

reffer = lambda x: bcv_to_verse_ref(x, start=61)


gnt_verses = get_tokens_by_chunk(TokenType.lemma, ChunkType.verse)

commons = dict()
LIM = 5
for verse, lemma in gnt_verses.items():
    print(reffer(verse))
    verse_set = set(lemma)
    for v, l in gnt_verses.items():
        if v == verse:
            continue
        vset = set(l)
        u = verse_set.union(vset)
        intr = verse_set.intersection(vset)
        not_common = u - intr
        if len(not_common) < LIM:
            if verse in commons:
                commons[verse].append(v)
            else:
                commons[verse] = [v]
with open("common_list_verses.txt", 'w') as g:
    for k,v in commons.items():
        print(reffer(k), file=g)
        for i in v:
            print("\t" + reffer(i), file=g)
print("DONE!")

Here's a snippet of the results:

Matt 4:14
    Matt 2:17
    Matt 12:17
    Matt 21:4

Now let's compare them (Greek text taken from [1]):

Matt 4:14 is ἵνα πληρωθῇ τὸ ῥηθὲν διὰ Ἠσαΐου τοῦ προφήτου λέγοντος·

  • Matt 2:17 – τότε ἐπληρώθη τὸ ῥηθὲν ⸀διὰ Ἰερεμίου τοῦ προφήτου λέγοντος
  • Matt 12:17 – ⸀ἵνα πληρωθῇ τὸ ῥηθὲν διὰ Ἠσαΐου τοῦ προφήτου λέγοντος·
  • Matt 21:4 – Τοῦτο ⸀δὲ γέγονεν ἵνα πληρωθῇ τὸ ῥηθὲν διὰ τοῦ προφήτου λέγοντος·
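As a side note, the heart of the loop above is plain set arithmetic. A toy version with two made-up mini-"verses" (the lemma lists here are illustrative, not real GNT data) shows how the non-shared count works:

```python
# Two hypothetical lemma lists standing in for entries in gnt_verses.
verse_a = ["ινα", "πληροω", "ο", "λεγω", "δια", "προφητης"]
verse_b = ["τοτε", "πληροω", "ο", "λεγω", "δια", "προφητης"]

set_a, set_b = set(verse_a), set(verse_b)
# union minus intersection is the symmetric difference:
# the lemmas the two verses do NOT share
not_common = set_a.union(set_b) - set_a.intersection(set_b)

# With LIM = 5, these two "verses" count as sharing vocabulary,
# since only two lemmas differ.
print(sorted(not_common))
```

Python also offers set_a.symmetric_difference(set_b) (or `set_a ^ set_b`) as a one-step equivalent of the union-minus-intersection used in the original code.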

What about larger units of text

Ok, but who wants to skip around reading random verses? By making a few tweaks to the code above we can compare pericopes.

gnt_verses = get_tokens_by_chunk(TokenType.lemma, ChunkType.pericope)
...

LIM = 10
...
with open("common_list_pericope.txt", 'w') as g:
    for k,v in commons.items():
        print(k, file=g)
        for i in v:
            print("\t" + i, file=g)

Which returns the following passages. I had to write some extra code to convert the pericope codes into the normal passage references so you'll want this file and this file if you want to run this part yourself.

Mark 10:13 - Mark 10:16
    Luke 18:15 - Luke 18:17
Luke 18:15 - Luke 18:17
    Mark 10:13 - Mark 10:16
Eph 1:1 - Eph 1:2
    Col 1:1 - Col 1:2
Col 1:1 - Col 1:2
    Eph 1:1 - Eph 1:2

By changing LIM to 15 we get the following list.

Mark 10:13 - Mark 10:16
    Luke 18:15 - Luke 18:17
Luke 18:15 - Luke 18:17
    Mark 10:13 - Mark 10:16
Eph 1:1 - Eph 1:2
    Phil 1:1 - Phil 1:2
    Col 1:1 - Col 1:2
    2 Thess 1:1 - 2 Thess 1:2
    2 Tim 1:1 - 2 Tim 1:2
Phil 1:1 - Phil 1:2
    Eph 1:1 - Eph 1:2
    Col 1:1 - Col 1:2
    2 Thess 1:1 - 2 Thess 1:2
Col 1:1 - Col 1:2
    Eph 1:1 - Eph 1:2
    Phil 1:1 - Phil 1:2
    2 Thess 1:1 - 2 Thess 1:2
    2 Tim 1:1 - 2 Tim 1:2
2 Thess 1:1 - 2 Thess 1:2
    Eph 1:1 - Eph 1:2
    Phil 1:1 - Phil 1:2
    Col 1:1 - Col 1:2
    1 Tim 1:1 - 1 Tim 1:2
    2 Tim 1:1 - 2 Tim 1:2
    Phlm 1:1 - Phlm 1:3
1 Tim 1:1 - 1 Tim 1:2
    2 Thess 1:1 - 2 Thess 1:2
    2 Tim 1:1 - 2 Tim 1:2
2 Tim 1:1 - 2 Tim 1:2
    Eph 1:1 - Eph 1:2
    Col 1:1 - Col 1:2
    2 Thess 1:1 - 2 Thess 1:2
    1 Tim 1:1 - 1 Tim 1:2
Phlm 1:1 - Phlm 1:3
    2 Thess 1:1 - 2 Thess 1:2

κ.τ.λ.

ChunkType could also be changed to chapter if you'd like to compare chapters.

All of the above uses lemmas. If you are interested in forms, then simply replacing TokenType.lemma with TokenType.form in this line will do the trick.

gnt_verses = get_tokens_by_chunk(TokenType.form, ChunkType.pericope)

I doubt this will change your life as a student or as a teacher, but it is certainly interesting to know which verses or passages share vocabulary. This could help us develop better reading assignments for students or direct us to which passages could be interesting reading to grow our own vocabulary.


[1]: Michael W. Holmes, The Greek New Testament: SBL Edition (Lexham Press; Society of Biblical Literature, 2011–2013)

Nov 14, 2019

Fun with James Tauber's vocabulary tools

James Tauber has written a set of vocabulary tools for the Greek New Testament (GNT).

I wanted to read Acts 10 and thought I'd see which words occur there that occur less than 10 times in the GNT overall. The code in Listing 1 will get that list and print each word and its total GNT count to a text file.

Listing 1

from collections import Counter
from gnt_data import get_tokens, get_tokens_by_chunk, TokenType, ChunkType
import pprint

BOOK_ABBV = {"GLA": "69", "1JN" : "83", "ACT": "65"}

gnt_lemmas = Counter(get_tokens(TokenType.lemma))

ACT_10_lemmas = Counter(get_tokens(TokenType.lemma, ChunkType.chapter, BOOK_ABBV['ACT']+ "10"))


def getNTFreq(nt, tgt):
    out = {}
    for t in tgt.items():
        lemma = t[0]
        if lemma in nt:
            out[lemma] = nt[lemma]
    return out

ACT_NT_FREQ = getNTFreq(gnt_lemmas, ACT_10_lemmas)

freq = lambda x: int(x[1]) < 10



TGT = sorted(list(filter(freq,ACT_NT_FREQ.items())), key=lambda x: x[0])

pprint.pprint(TGT)

print(len(TGT))

with open("act_10.txt", 'w', encoding="UTF-8") as f:
    for l in TGT:
        print(f"{l[0]}\t\t{l[1]}", file=f)
print("Done!")

I then wanted the glosses for these words.

I have a list of glosses extracted from the Abbott-Smith NT Greek lexicon (the list is available here). So I wrote some code to read the output from the previous script, grab the glosses, and add them to the file.

Listing 2

import sys

GLOSSES = {}

with open('gloss-dict.tab', 'r', encoding="UTF-8") as f:
    for l in f:
        parts = l.strip().split("\t", maxsplit=2)
        if len(parts) > 1:
            GLOSSES[parts[0]] = parts[1]

ARGS = sys.argv[1:]

with open(ARGS[0], 'r', encoding="UTF-8") as f:
    with open(ARGS[1], 'w', encoding="UTF-8") as g:
        for l in f:
            word = l.strip().split("\t", maxsplit=1)
            if word[0] in GLOSSES:
                rest = "\t".join(word[1:])
                print(f"{word[0]}\t{GLOSSES[word[0]]}\t{rest}", file=g)

I printed the resulting file out and I'm off reading. It's nice to have a cheat sheet of less common vocab for the chapter.

May 16, 2019

Anki cloze cards for learning paradigms

Comprehensible input is the way to go. The research in second language acquisition indicates that consciously memorized knowledge can't morph into facility with a language in the brain. The brain builds its own black-box model of a language through our interaction with it. This process relies on understanding the meaning of what we read or hear, not on understanding the grammar.

In my experience, having familiarity with the paradigms and tables helps me figure out what an unfamiliar form in the text is. At least it helps me think through whether this is a new form or a completely new word. This in turn helps make unfamiliar things more understandable. This increased ability to understand is the payoff, not the conscious knowledge of the grammar.

Memorizing tables is a pain and is probably not worth too much effort. Familiarity, however, has been helpful to me. The following is a method to use Anki (free, SRS flashcard software) and cloze deletion flashcards to help with this process of building familiarity.

What is a cloze card?

A cloze card is a flashcard where the piece of information to be learned is blanked out or deleted. This blanked-out info is called a "cloze deletion". When studying, our task is to recall the missing piece.

Our brain remembers things better if we can form associations between the new information and other things that we know. The more connections, the stronger the memory and the easier the recall.

So instead of trying to memorize and recall a whole paradigm, I propose that we use cloze deletions and blank out only a few pieces of the paradigm.

The advantage is that we are only asking ourselves to recall a few forms at a time. This lowers our stress and makes our minds more open to receiving the information. Also we are seeing the other forms on the table and thus seeing the connections between the current form and the rest of the paradigm.

For example if we wanted to learn or become familiar with the following paradigm:


        Masc   Fem    Neut
NOM SG  εἷς    μία    ἕν
GEN SG  ἑνός   μιᾶς   ἑνός
DAT SG  ἑνί    μιᾷ    ἑνί
ACC SG  ἕνα    μίαν   ἕν

We could create cloze cards as follows in order to learn the Masc and Neut, NOM SG forms:


        Masc   Fem    Neut
NOM SG  [...]  μία    [...]
GEN SG  ἑνός   μιᾶς   ἑνός
DAT SG  ἑνί    μιᾷ    ἑνί
ACC SG  ἕνα    μίαν   ἕν

Then the following could be used for the Masc and Neut Gen SG forms:


        Masc   Fem    Neut
NOM SG  εἷς    μία    ἕν
GEN SG  [...]  μιᾶς   [...]
DAT SG  ἑνί    μιᾷ    ἑνί
ACC SG  ἕνα    μίαν   ἕν

And the following to learn the Fem, NOM SG form:


        Masc   Fem    Neut
NOM SG  εἷς    [...]  ἕν
GEN SG  ἑνός   μιᾶς   ἑνός
DAT SG  ἑνί    μιᾷ    ἑνί
ACC SG  ἕνα    μίαν   ἕν

We could proceed by adding cloze deletions for the other pieces of the paradigm that we want to recall. Note that it may not be necessary to create a cloze deletion for every piece of the table. Remember that the goal is familiarity.
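If building each variant by hand gets tedious, the cloze markup can also be stamped out with a short script. This is only a sketch of the idea; the clozed_row helper is my own, not part of Anki or vocabulary-tools:

```python
def clozed_row(label, forms, blank_cols, n):
    """Wrap the chosen columns of one paradigm row in {{cN::...}} cloze markup."""
    cells = [f"{{{{c{n}::{form}}}}}" if i in blank_cols else form
             for i, form in enumerate(forms)]
    return label + " " + " ".join(cells)

# Blank the Masc and Neut NOM SG cells together as cloze c1,
# then the Fem cell on its own as c2.
print(clozed_row("NOM SG", ["εἷς", "μία", "ἕν"], {0, 2}, 1))
# NOM SG {{c1::εἷς}} μία {{c1::ἕν}}
print(clozed_row("NOM SG", ["εἷς", "μία", "ἕν"], {1}, 2))
# NOM SG εἷς {{c2::μία}} ἕν
```

Pasting rows like these into the Text field of a cloze note gives the same result as selecting the forms and clicking the [...] button by hand.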

Creating tables in Anki

Anki includes a cloze deletion card type. The trick is the table. You can either use spaces or tabs to format the table manually or you can use HTML.

Spaces and tabs

Spaces and tabs would be the simplest, but the result may not look as nice since you have to line up the columns manually. The result would be something like this.

       Masc  Fem   Neut
NOM SG εἷς   μία   ἕν
GEN SG ἑνός  μιᾶς  ἑνός
DAT SG ἑνί   μιᾷ   ἑνί
ACC SG ἕνα   μίαν  ἕν

HTML tables

The following instructions assume you are using the desktop version of Anki to create the cards. Once you sync your collection to your phone, you can view the HTML tables there too.

Either you have to write the HTML manually, or you can use an HTML table generator such as tablesgenerator.com.

Once you have your HTML code, click Add to add a card. Select the cloze card type. Then click on the field under the label Text. After that, click on the three horizontal bars on the right side of the formatting bar, select Edit HTML or hit CTRL + SHIFT + X, paste the HTML code into the pop-up, and click close.

Creating the cloze deletions

The following assumes the desktop app, but the process is very similar on the mobile apps.

To create a cloze deletion, select the text you want to blank out and click the [...] button on the format bar or hit CTRL + SHIFT + C. This will wrap the info in a cloze code that looks like {{c1::info to be learned}}.

You can create multiple cloze deletions per card. c1 will be the first, then c2 will be the second, and so forth.

If you want to have multiple pieces of the table blanked out at the same time, edit the number following the c so that they are the same. For example, if I wanted 'εν' and 'ο λογος' to be cloze deletions at the same time, I would edit the numbers following the c so that it looks like {{c1::εν}} αρχη ην {{c1::ο λογος}}.

Final thoughts

Should creating and studying flashcards in Anki replace other learning activities that focus on comprehensible input? No. But they can be a useful parallel activity.

Oct 06, 2018

Recommended resources on Koine Greek

Below are some resources that I like, have used, or want to use as well as my thoughts on them. Hopefully, I'll update this page as I find more resources.

Basic Grammars

Mounce, William D. 2009. Basics of Biblical Greek Grammar. 3rd Ed. Grand Rapids: Zondervan.

Mounce's book is what my first Greek class used. There may be other books out there that are better, but I'm fond of this one and would recommend it as an introduction.

Betts, Gavin. 2004. Teach Yourself Ancient Greek Complete Course. 2nd Ed. Blacklick: McGraw-Hill.

I've fiddled with this book. It's not too hard, but it dumps new vocabulary on you by the train car load and thus brings a huge memorization load. It does have lots of examples from a variety of sources, some of which are longer form content – which I like. It might be better to work through something else first, though. I can't really say, because I had already used Mounce's book before I got this one. Hopefully, I'll work all the way through it one of these days.

Black, David Alan. 2009. Learn to Read New Testament Greek. 3rd ed. Nashville: Broadman & Holman Publishers.

I have not used this book (available here), but Black is one of the pioneers in bringing the insights of modern linguistics to Koine Greek studies – at least from my perspective. I would like to look through it one day. And though I have not read it, I would recommend it based on the reputation of its author.

Greek readers

The Simonides Project has reformatted Greek and Latin readers that are out of print and no longer under copyright. They are free and look fantastic.

Reference Works

Runge, Steven E. 2010. Discourse Grammar of the Greek New Testament: A Practical Introduction for Teaching and Exegesis. Peabody: Hendrickson.

You need to read this at some point.

This is my favorite Greek book because it explains how Koine Greek communicates. Unlike other grammars, it doesn't focus on the nuts and bolts of verbs or the case system; rather, this book explores how Greek forms larger units of thought. As readers of Koine Greek, our ultimate goal is to understand what the author is trying to say, how it is being said, and what he/she is trying to emphasize. This book unlocks these aspects of the language.

I also love it because Runge's book sits at the crossroads of Koine Greek studies and discourse analysis. I think it is one of many books, now and to come, that take insights from the field of linguistics and apply them to Koine Greek. Runge takes linguistic discourse analysis and applies it to Koine Greek in a way that is easy to understand – even if you aren't a linguist. He uses lots of examples and works through them so you can see how the ideas he is talking about work out in practice.

Online Courses

Video

  • Leonard Muellner and Belisi Gillespie present a video series on Ancient Greek and use Greek: An Intensive Course as their textbook. I have not watched this course in full or used this book, but I wanted to mention it as a resource if you are looking for an online course in Ancient Greek.

University of Texas at Arlington courses

The University of Texas at Arlington has online courses (or lessons) that introduce many ancient languages including Classical and Koine Greek. I have looked at them before, though I have never worked through them all the way. I am noting them here for reference. Both the Classical and Koine Greek courses provide an overview of the language and some lessons based on Greek texts.

Where Are Your Keys (WAYK)

Where Are Your Keys is one of the main methods that I will be trying out in my Greek course in Fall 2018. Below are links that I have found related to WAYK and Greek.

  • Seumas Macdonald has a number of interesting resources.
    • His current site The Patrologist has a lot of interesting thoughts on Greek and Latin.
    • He also has a few intro videos on YouTube demonstrating how to use WAYK with Koine Greek (see here and here).
    • His old blog has a page about WAYK and Greek
    • Finally he has worked out a curriculum for WAYK and Greek and posted it here along with an interesting lexicon of a variety of languages that is available here
    • He also has a podcast in Ancient Greek
  • Greek and Latin online chat via Google Hangouts. I personally have never participated, but it is something that is worth knowing about.
  • Greek-English phrasebook (pdf download) translated from the German Sprechen Sie Attisch?