My computer science course of Jan 2016

As I mentioned before, in January I taught a class preparing master's students for an exam in theoretical computer science.

There were a little over 20 people in attendance, and it was a nice experience I wouldn't mind repeating. During the evaluation process (I hope to get access to the official evaluation dataset) I received almost exclusively high marks in all categories, which I'm rather proud of.

I was a bit baffled when I asked for a list of well-known set operations and was met with blank faces, but one never knows whether it is shyness or missing knowledge that keeps the answer from coming. I'd like it to be the former.


Since, even after my first teaching job, I've yet to run out of hope, I offered to answer questions via email after the fact. A few messages later, here is what most students seem to have problems with:

How do you find out whether a language is regular/context-free?

For both types of language we have something called the "pumping lemma", which basically says that in any infinite language with a finite set of production rules there are words containing a repeating pattern, and that pattern corresponds to a loop within the production rules.

For regular languages (say \(R\)) it is basically \(L \in R \to (\exists u,v,w : uvw \in L \wedge uv^*w \subseteq L)\), with the condition that \(v\) is not empty. The idea is that a regular language can be represented by a deterministic finite automaton (DFA) with a finite number of states, while the language is infinite, so some state has to be visited more than once in the course of accepting a sufficiently long word. After reading \(u\) you are in some state; after then reading \(v\) you are back in that same state; and after reading \(w\) the word is accepted. The part \(v\) can therefore be repeated many times or left out entirely.
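To make the DFA argument concrete, here is a minimal sketch in Python (the automaton and the word are toy choices of mine, not anything from the course): it runs a DFA for "words over \(\{a, b\}\) with an even number of \(a\)'s", finds the first repeated state in an accepting run, reads off \(u\), \(v\) and \(w\), and checks that pumping \(v\) keeps the word in the language.

```python
# Minimal sketch: replay the pumping argument on a toy DFA for the regular
# language "words over {a, b} with an even number of a's".

DFA = {
    "even": {"a": "odd", "b": "even"},
    "odd": {"a": "even", "b": "odd"},
}
START, ACCEPTING = "even", {"even"}

def accepts(word):
    state = START
    for symbol in word:
        state = DFA[state][symbol]
    return state in ACCEPTING

def pumping_split(word):
    """Split an accepted word into u, v, w at the first repeated state of its run."""
    state, seen = START, {START: 0}
    for i, symbol in enumerate(word, start=1):
        state = DFA[state][symbol]
        if state in seen:                         # this state was visited before:
            j = seen[state]                       # u leads to it the first time,
            return word[:j], word[j:i], word[i:]  # v loops back to it, w finishes
        seen[state] = i
    raise ValueError("word has fewer symbols than the DFA has states")

word = "abab"                  # accepted: it contains two a's
u, v, w = pumping_split(word)  # here: u = 'a', v = 'b', w = 'ab'
assert accepts(word)
assert all(accepts(u + v * n + w) for n in range(5))  # u v^n w stays in the language
```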

[Figure: pumping lemma illustration (CC BY-SA 3.0 / Jochen Burghardt)]

For context-free languages (say \(CF\)) this is nearly the same, except that the loop is some nonterminal that can produce itself via the production rules. Repeated parts appear to the left and right of the looping nonterminal equally often (\(v\) and \(x\)). The looping nonterminal also produces a word without looping (otherwise every derivation involving that nonterminal would never terminate), which we call \(w\). So the formula for the context-free version of the pumping lemma is basically \(L \in CF \to (\exists u,v,w,x,y : uvwxy \in L \wedge \forall n \in \mathbb{N} : uv^nwx^ny \in L)\), with the side condition that \(v\) and \(x\) are not both empty.
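Again a toy sketch of mine, this time for the context-free case: the grammar \(S \to aSb \mid \varepsilon\) generates \(\{a^n b^n\}\), and the loop \(S \Rightarrow aSb\) directly yields \(v = a\) and \(x = b\), with \(w\) being whatever the inner \(S\) finally produces.

```python
# Minimal sketch: pump a word of { a^n b^n } using the decomposition that the
# looping nonterminal S of the grammar S -> aSb | eps hands us.

def in_anbn(word):
    """Membership test for { a^n b^n : n >= 0 }."""
    n = len(word) // 2
    return word == "a" * n + "b" * n

# Read off from the derivation S => aSb => aaSbb => aabb: the loop on the outer S
# contributes v = 'a' and x = 'b', and the inner S produces w = 'ab'.
u, v, w, x, y = "", "a", "ab", "b", ""
assert u + v + w + x + y == "aabb"

# Pumping v and x the same number of times keeps the a's and b's balanced.
for n in range(6):
    assert in_anbn(u + v * n + w + x * n + y)
```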

Both formulas have the form \(L \in L' \to PL\), and we get the contrapositive \(\neg PL \to L \notin L'\) for free. To show that a language is regular or context-free it is therefore not enough to show that the pumping lemma holds, because there are languages that are not in that category and still have words that can be pumped.

So to show that a language is context-free/regular one has to construct a context-free grammar or a (non)deterministic finite automaton, and to show that it is not, one shows that there is no way to pump its words.
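And for the other direction, a small brute-force sketch (again my own toy example, and only a finite check, not a proof): for words of \(\{a^n b^n\}\) up to a bound, no split \(uvw\) with non-empty \(v\) survives pumping, which is exactly the \(\neg PL\) one needs to conclude that the language is not regular.

```python
# Minimal sketch of the contrapositive in action: brute-force all splits u v w of
# short words of { a^n b^n } and check that none of them can be pumped.  This only
# covers finitely many words, so it illustrates the argument rather than proving it.

def in_anbn(word):
    n = len(word) // 2
    return word == "a" * n + "b" * n

def has_pumpable_split(word, repeats=4):
    """Is there a split word = u v w with v non-empty and u v^n w in the language
    for all n = 0 .. repeats-1?"""
    for i in range(len(word) + 1):             # end of u
        for j in range(i + 1, len(word) + 1):  # end of v (so v is non-empty)
            u, v, w = word[:i], word[i:j], word[j:]
            if all(in_anbn(u + v * n + w) for n in range(repeats)):
                return True
    return False

# Pumping inside the a's or the b's unbalances the counts, and pumping across the
# border scrambles the letters -- so no split works for any of these words.
words = ["a" * k + "b" * k for k in range(1, 8)]
assert not any(has_pumpable_split(word) for word in words)
```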

The direction of the implication arrow (or the fact that it is not an equivalence) seems to have my students stumped.


And here you go with two languages that fulfill the pumping lemma for a language class without being a language of that class:
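Two standard examples of this kind (the usual textbook suspects) are

\[ L_1 = \{ a^m b^n c^n \mid m \ge 1,\ n \ge 0 \} \cup \{ b^n c^m \mid m, n \ge 0 \}, \]

which fulfills the pumping lemma for regular languages (pump the leading \(a\) if there is one, otherwise any letter) but is not regular, since intersecting it with the regular language \(ab^*c^*\) gives \(\{ab^nc^n \mid n \ge 0\}\); and

\[ L_2 = \{ a^i b^j c^k d^l \mid i = 0 \text{ or } j = k = l \}, \]

which fulfills the pumping lemma for context-free languages (pump the leading \(a\) if there is one; if \(i = 0\) there is no constraint to break) but is not context-free, since intersecting it with \(ab^*c^*d^*\) gives \(\{ab^nc^nd^n \mid n \ge 0\}\).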



