Proving that a word is *not* generated by a context-free grammar
Problem
I saw the answer in one of the solutions and I cannot figure out how they got it. The question asks whether a given word is in the language generated by the grammar or not...

S -> aSb | ab

ab   Yes
aaaa No
aabb Yes

How did they determine that ab is in the language and aaaa is not?

edit:

ab is in the language because ab is directly a right-hand side of S -> aSb | ab. aaaa is not in the language because every rule produces a b, so no derivable word can consist of a's only.

aabb is in the language because of aSb: starting from ab, you add the same number of a's and b's on both sides.

Solution
Let's assume you want to show that some word $w$ of length $n$ is not in the language generated by some context-free grammar $G$.
You can solve the problem algorithmically. There are two approaches:

- Enumerate all strings in $L(G)$ of length at most $n$. In order to do that,

  - transform the grammar into Chomsky normal form and
  - enumerate all derivations with at most $2n - 1$ rule applications.

  This works since CNF has no $\varepsilon$-rules: every rule application either produces one terminal symbol or adds one nonterminal, so a word of length $n$ is derived in exactly $2n - 1$ steps. If every rule of your original grammar already produces at least one terminal (as does the one in your question), you can skip the transformation into CNF and bound derivations by $n$ rule applications.

  This is tractable for small $n$ and/or grammars with few words per length. As soon as $L(G)$ grows fast with $n$, it breaks down.
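As an illustration, here is a minimal sketch (in Python) of the enumeration approach, specialized to the grammar from the question; the function name and the uppercase-means-nonterminal encoding are my own choices, not part of the original answer.

```python
# Enumerate all words of L(G) up to length n for S -> aSb | ab.
# Nonterminals are uppercase letters. Since every rule adds terminals,
# any sentential form longer than n can be pruned: it only derives
# words longer than n.
from collections import deque

RULES = {"S": ["aSb", "ab"]}

def words_up_to(n, start="S"):
    """Return all words of L(G) of length at most n."""
    words = set()
    queue = deque([start])
    seen = {start}
    while queue:
        form = queue.popleft()
        if form.islower():                 # no nonterminals left: a word
            words.add(form)
            continue
        i = next(j for j, c in enumerate(form) if c.isupper())
        for rhs in RULES[form[i]]:         # expand the leftmost nonterminal
            new = form[:i] + rhs + form[i + 1:]
            if len(new) <= n and new not in seen:
                seen.add(new)
                queue.append(new)
    return words
```

For the examples above, `words_up_to(4)` yields `{"ab", "aabb"}`, which confirms that `aaaa` is not in the language: it does not appear among the words of length at most 4.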
- Parse $w$, for instance with the CYK algorithm. This runs in time $\Theta(n^3)$ and is feasible for most grammars and inputs of reasonable size.
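A sketch of the CYK option, assuming a hand-made CNF conversion of the question's grammar; the helper nonterminal names X, A, B are my own and not canonical.

```python
# CYK membership test for S -> aSb | ab, converted by hand into CNF:
#   S -> A X | A B,   X -> S B,   A -> a,   B -> b
TERMINAL = {"A": "a", "B": "b"}              # rules N -> terminal
BINARY = {"S": [("A", "X"), ("A", "B")],     # rules N -> N N
          "X": [("S", "B")]}

def cyk(word, start="S"):
    """True iff `word` is derivable from `start`; fills an O(n^2) table in O(n^3) time."""
    n = len(word)
    if n == 0:
        return False                         # CNF has no eps-rules
    # table[(i, l)] = nonterminals deriving the length-l substring at position i
    table = {(i, 1): {nt for nt, t in TERMINAL.items() if t == c}
             for i, c in enumerate(word)}
    for l in range(2, n + 1):                # substring length
        for i in range(n - l + 1):           # start position
            cell = set()
            for k in range(1, l):            # split point
                left, right = table[(i, k)], table[(i + k, l - k)]
                for nt, prods in BINARY.items():
                    if any(x in left and y in right for x, y in prods):
                        cell.add(nt)
            table[(i, l)] = cell
    return start in table[(0, n)]
```

On the question's examples, `cyk("ab")` and `cyk("aabb")` return `True` while `cyk("aaaa")` returns `False`.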
Clearly, the latter is the better option in an algorithmic mindset, but the former might be prudent if you have to make a quick decision given a short word in an exam-like situation.
Context
StackExchange Computer Science Q#23609, answer score: 8