Sunday, February 2, 2020

Language theory.

While tinkering and trying to improve parts of software I had written, I stumbled on Flex and Bison, and then on references to formal language theory.
I bought the book (Introduction to Formal Languages, Révész) but I am having difficulties:
-- I imagine things that are not what the book actually presents, and so I don't understand what I read.
-- I constantly digress into different areas, including building a language generator driven by a generative grammar.

So I am not making progress, and it is not just a matter of diligent work.

Managers who have observed me since the beginning of my career noted a certain lack of ambition, but one could sometimes hypothesize instead an outsized ambition, which can easily be mistaken for recklessness, madness, or even stupidity.

You have already been warned about the risks I take in this blog. They are out of proportion to my knowledge of the subjects discussed.

So language theory is important because, for me, it sits above the other theories.
To really understand a foreigner's way of thinking, it is better to know their language well: not just to speak it fluently, but to know its structure. At first we are unable to reproduce the sounds, partly because we are unable to recognize them; well... the reason is probably more complex.

It is therefore possible that the obstacles encountered in fundamental physics research are due to the absence of a suitable language. The universe is itself a language that we try to translate into one we can understand. But which language should we look for? Could language theory, and the known problems of translation, help us move toward new structures? Will these languages still be manipulated directly by human brains, or handled almost entirely by computers? One could even wonder whether the new languages that will be used with quantum computers will give birth to new mathematical theories.

But I have to stop: I must leave research mode and return to learning mode, which means limiting the daydreaming, setting a schedule, and counting the pages read.

For me, research requires a degree of incompetence: at some point you are forced to use fields of knowledge that are foreign to you. That is why multidisciplinary teams are needed.

All by myself, I am a big team of very incompetent people... plus Google.

Here is an attempt at a language generator (all Chomsky types, 0-3).
It is not fully tested, so use it at your own risk!

A Python program: a language generator from grammar rules

The test suite for this program

It works with grammars that generate words of unbounded length.
It simply cuts off a derivation when the length exceeds a chosen limit.
But it is not always possible to predict the length of the final word when rules can shrink a word, as in aPb --> ac.
That kind of rule normally triggers a warning message, and we might not see every word below the chosen limit in the final result.
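To make the cutoff concrete, here is a minimal sketch of a breadth-first rewriter that drops any sentential form longer than a chosen limit. This is my own toy illustration, not the published program; the function name `generate`, the `max_steps` safety cap, and the lowercase-terminal convention are all assumptions.

```python
from collections import deque

def generate(rules, start="S", max_len=6, max_steps=100000):
    """Naive breadth-first derivation with a length cutoff.

    rules: list of (lhs, rhs) rewrite pairs; a string made only of
    lowercase characters counts as a finished word.  Forms longer than
    max_len are dropped -- which is only safe when no rule shrinks a
    word (the caveat discussed above).
    """
    words = set()
    seen = {start}
    queue = deque([start])
    steps = 0
    while queue and steps < max_steps:
        form = queue.popleft()
        steps += 1
        if form.islower():            # only terminals left: a word of L
            words.add(form)
            continue
        for lhs, rhs in rules:        # rewrite every occurrence of lhs
            i = form.find(lhs)
            while i != -1:
                new = form[:i] + rhs + form[i + len(lhs):]
                if len(new) <= max_len and new not in seen:
                    seen.add(new)
                    queue.append(new)
                i = form.find(lhs, i + 1)
    return words
```

With the context-free sample rules S --> aSb | ab and max_len = 6, this yields {"ab", "aabb", "aaabbb"}; a shrinking rule could wrongly discard a long form whose descendants are short, which is exactly the warning case above.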

Before publishing it I received comments that the program wasn't working. Now they can say it officially!

This program might be useful for generating special kinds of permutations, like the example given at the end of the source: generating L = {P | Na(P) = Nb(P)} with Vn = {S,A,B} and Vt = {a,b}. I am still looking for a way to build a test suite and to generate an equivalent language with itertools.
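For the itertools comparison mentioned above, one brute-force sketch (my own assumption of what such a reference generator could look like, not the author's missing test suite) is to enumerate every word over {a, b} up to a length bound and keep the balanced ones:

```python
from itertools import product

def balanced_words(max_len):
    """All words over {a, b} of length <= max_len with Na = Nb.

    Brute force: 2**n candidates per length n, so keep max_len small.
    """
    words = set()
    for n in range(2, max_len + 1, 2):   # a balanced word has even length
        for letters in product("ab", repeat=n):
            w = "".join(letters)
            if w.count("a") == w.count("b"):
                words.add(w)
    return words
```

A grammar-based generator could then be checked against `balanced_words(limit)` by set equality on the words it produces below the same length limit.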

A lot of interesting problems arise simply from trying to solve practical issues, like building a subset of the language or limiting the length of the words.

Some vague ideas:
-- Structure of a language: we could compare languages just by comparing the general shape of the tree explored during the derivation (abstracting away the alphabet itself).
-- How to add shrinking rules that will collapse parts of the language.
-- How to find a generative grammar for a preexisting language.
-- How to optimize a generative grammar (fewer rules, or rules that are more efficient during the derivation).
-- Is it possible to have a canonical form for a generative grammar?

The general idea would be to find new languages that could, by their structure, expand the possibilities of existing languages.

After testing some generative grammars with my program I had to make some modifications.
One of them was to let the program keep running even when it encounters situations where non-terminal characters remain but no rule can apply.
This is the case with this grammar:
    F = [("S","a"), ("S","CD"), ("C","ACB"), ("C","AB"), ("AB","aBA"),
         ("Aa","aA"), ("Ba","aB"), ("AD","Da"), ("BD","Ea"), ("BE","Ea"),
         ("E","a")]
generating L = { a^(n^2) | n in 1, 2, 3, ... }
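The situation just described (non-terminals left, but no applicable rule) can be detected with a small predicate. This `is_stuck` helper is my own sketch, assuming terminals are the lowercase characters:

```python
# Grammar from the text, generating L = { a^(n^2) | n = 1, 2, 3, ... }
F = [("S", "a"), ("S", "CD"), ("C", "ACB"), ("C", "AB"), ("AB", "aBA"),
     ("Aa", "aA"), ("Ba", "aB"), ("AD", "Da"), ("BD", "Ea"), ("BE", "Ea"),
     ("E", "a")]

def is_stuck(form, rules):
    """True when the sentential form still contains non-terminal
    (uppercase) characters but no rule's left-hand side occurs in it:
    a non-derivable dead end."""
    has_nonterminal = any(c.isupper() for c in form)
    can_rewrite = any(lhs in form for lhs, _ in rules)
    return has_nonterminal and not can_rewrite
```

A generator can discard stuck forms immediately instead of carrying them along, which matches the modification described above.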

This grammar generates (if I made no mistakes) a lot of non-derivable sentential forms, many more than the words in L: I had to wait hours before seeing the first words appear.

A way to recognize this kind of grammar would be more useful to me than the Chomsky types. But I am a programmer, not a linguist.
There may also be a way to determine that some rules have no chance of applying again during a derivation. Again, a way to classify the grammar and the language.
How can this grammar be transformed into a more efficient one? When is that possible?
There are so many possibilities, but computers could help.

I just heard (hallucinations or electronic harassment): "He is half serious".

Making such assumptions about fundamental research is easy: I mean, everybody should be aware of the conceptual limitations induced by languages.

Knowing next to nothing about quantum mechanics, I can't try to detect whether a failure is due to a language limitation (the wrong language) or to a wrong model using the right language.

So, along with new observations, we might have to find new languages (not only based on numbers). Of course we can't throw numbers away. But what else can we do with them? Do we need the continuity given by the real numbers? Is integral calculus still the only way?

So fundamental research is a kind of reverse engineering toward a language we don't even know. Wouldn't people working on languages and people building even simple models be useful together?

After hours of trying to study the book by György E. Révész (Introduction to Formal Languages), I haven't been able to fully understand any of the proofs given for the theorems. If I say "fully", it is only because I don't want to be too hard on myself. Even understanding the statements of some theorems is a problem. I don't know the reason: perhaps I am too old, or I went too long without training my brain, or I unwillingly took some pills that weaken intellectual abilities.

They have decided that I am a kind of Jean-Paul Farré, but I don't have his talent. I just like to amuse people and myself. It is not easy with science. Science isn't entertainment, but music is.

To illustrate the kind of difficulties I have with Introduction to Formal Languages, I'll give this example, and later an attempt to solve the problem in a way that works for somebody like me who lacks the training or the gifts. Some people might find this as ridiculous as counting on your fingers.


And here is the proof in the form of an algorithm: I'm a programmer.

If P = a^i b1 W1 b2 W2 ... bi Wi = a^i X,
then X = b1 W1 b2 W2 ... bi Wi and we have i more b's than a's in X:
Na(X) + i = Nb(X).
The character at the beginning of X is X[0] = b1.
If X1 = X[1:len(X)] is the substring of X from position 1 to the end,
we have Na(X1) + i - 1 = Nb(X1).
As long as we haven't encountered all the bi's, there are more b's than a's in the remaining string Xn.

If Xn begins with an 'a', we are sure to find a string Wi = Xn[n:n+k] in Xn (with n+k <= len(X)).
Suppose we start with k = 1:
we have Wi = "a" and Nb(Wi) - Na(Wi) = -1.
By the time we reach the end of Xn, Nb(Wi) - Na(Wi) = i - 1 >= 0, and the difference changes by exactly 1 at each step.
So we will find a k where Nb(Wi) - Na(Wi) = 0.
But Wi must be followed by a 'b'.
If Xn begins with a 'b', we treat it as one of the bi's that counterbalance the a's at the beginning of P. Once we have found all i of them, the remaining string surely has Na = Nb.

Now the program.
#!/usr/bin/env python3
import re as RE

class TestA():
    '''
    Split a word P | Na(P) = Nb(P) and P = a^i b1 W1 b2 W2 ... bi Wi.
    We want to get W1, W2, ..., Wi in a list
    and rebuild P from it to compare to the original P.
    '''
    def __init__(self, P):

        if len(P) % 2 == 0 and len(P) != 0:
            self.P = P
        else:
            raise Exception("P must have even length and must not be empty")

        if not self.NaEqNb(P):
            raise Exception("P must have Na = Nb")

        if P[0] == "a":
            match = RE.match("(a+)(b.*)$", P)
            if match is None:
                raise Exception("P doesn't match '(a+)(b.*)$' -> %s" % (P,))
            self.A = "a"
            self.B = "b"
        elif P[0] == "b":
            match = RE.match("(b+)(a.*)$", P)
            if match is None:
                raise Exception("P doesn't match '(b+)(a.*)$' -> %s" % (P,))
            self.A = "b"
            self.B = "a"
        else:
            raise Exception("P must be a word over {a, b}")

        Alist, X = match.groups()
        self.wlist = [Alist]
        self.X = X
        self.i = len(Alist)
        self.extract(X)

    def NaEqNb(self, X):
        return X.count("a") == X.count("b")

    def extract(self, X):
        n = 0
        ii = 0
        while n < len(X):

            if ii == self.i:
                # when we have consumed all bi's (matching the leading a's)
                # the remaining string is either empty or has Na = Nb
                subX = X[n:]
                if self.NaEqNb(subX):
                    self.wlist.append(subX)
                    return
                else:
                    raise Exception("Error: Na(X) != Nb(X)")
            # ii < i: still some bi's to process
            if X[n] == self.A:
                for x in range(n + 2, len(X) + 1, 2):
                    subX = X[n:x]
                    # Wk is the shortest balanced substring followed by a B
                    if self.NaEqNb(subX) and x < len(X) and X[x] == self.B:
                        self.wlist.append(subX)
                        n = x
                        break
                else:
                    raise Exception("no balanced W found in %s" % (X[n:],))
            elif X[n] == self.B:
                n += 1
                ii += 1
                self.wlist.append(self.B)
                if n == len(X):
                    self.wlist.append("")


if __name__ == "__main__":

    sample = ["aabababbab"]

    for p in sample:
        t = TestA(p)
        pp = "".join(t.wlist)
        if p != pp:
            raise Exception("P != PP: %s %s" % (p, pp))
    print("Test is ok for all samples")


Fuck!

Looking at the program built from this algorithm, and after running several tests, I could claim that the program will never fail, for any P in L, to find the expression P = a^i b W1 b W2 ... b Wi.
But could an automatic analysis of the program tell me that?
Could we have a special language that could be analyzed automatically?
Or could we have something like a symbolic calculus for sets, tuples, and lists?
It could be the first step in asking machines to take the risks for us,
with mathematicians losing part of the control they want to keep.
