Progress in artificial intelligence (AI), including machine learning (ML),

is having a large effect on many scientific fields at the moment, with much more to come.

Most of this effect is from machine learning or "numerical AI",

but I'll argue that the mathematical portions of "symbolic AI"

- logic and computer algebra - have strong and novel roles to play

that are synergistic with ML. First, applications to complex biological systems

can be formalized in part through the use of dynamical graph grammars.

Graph grammars comprise rewrite rules that locally alter the structure of

a labelled graph. The operator product of two labelled graph

rewrite rules involves finding the general substitution

of the output of one into the input of the next - a form of variable binding

similar to unification in logical inference algorithms. The resulting models

blend aspects of symbolic, logical representation and numerical simulation.
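As a concrete illustration (a minimal sketch, not the author's implementation), a labelled-graph rewrite rule can be represented as lists of labelled edges whose node names beginning with "?" are variables; forming the operator product of two rules then amounts to finding every substitution that binds one rule's output pattern into the next rule's input pattern, much like unification:

```python
# Minimal sketch of labelled-graph rewrite rules and the variable
# binding needed to form the operator product of two rules.  All names
# here are illustrative; graphs are lists of labelled edges
# (label, src, dst), and node names starting with '?' are variables.

def match(pattern, graph, binding=None):
    """Yield every substitution binding pattern variables to graph nodes."""
    binding = dict(binding or {})
    if not pattern:
        yield binding
        return
    (lab, a, b), *rest = pattern
    for (glab, ga, gb) in graph:
        if glab != lab:
            continue
        trial, ok = dict(binding), True
        for var, val in ((a, ga), (b, gb)):
            if var.startswith('?'):
                ok = ok and trial.setdefault(var, val) == val
            else:
                ok = ok and var == val
        if ok:
            yield from match(rest, graph, trial)

# Rule 1 output: a chain grown by one segment, ?x - ?y - ?z
rule1_rhs = [('link', '?x', '?y'), ('link', '?y', '?z')]
# Rule 2 input: any single link
rule2_lhs = [('link', '?u', '?v')]

# Composing rule 2 after rule 1: bind rule 2's input into rule 1's output.
bindings = list(match(rule2_lhs, rule1_rhs))
# Two bindings: {?u: ?x, ?v: ?y} and {?u: ?y, ?v: ?z}.
```

Each binding corresponds to one term in the operator product, just as each unifier contributes one resolvent in logical inference.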

Second, I have proposed an architecture of scientific modeling languages

for complex systems that requires conditionally valid translations of

one high level formal language into another, e.g. to access different

back-end simulation and analysis systems. The obvious toolkit to reach for

is modern interactive theorem verification (ITV) systems, e.g. those

based on dependent type theory (historical origins include Russell and Whitehead).

ML is of course being combined with ITV, bidirectionally.

Much work remains to be done, by logical people.

Part II: Current and planned work.

Operator algebra semantics (and the relevant Fock spaces)

for scientific modeling languages based on rewrite rules;

derivation of simulation algorithms;

semantics-approximating transformations using ML;

semantics-preserving transformations using ITV (?);

relation to graph grammar pushout semantics;

verbs vs. nouns vs. expressions and the logic of "eclectic types";

compositional and specialization hierarchies;

the "Tchicoma" conceptual architecture.
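To make the first of these topics concrete, here is a hedged sketch (illustrative only, not the planned semantics) of a rewrite rule acting as an operator on a Fock-space-like state: it annihilates its left-hand side and creates its right-hand side, weighted by the number of ways of matching the left-hand side in the current state.

```python
from collections import Counter

def rule_operator(lhs, rhs):
    """Operator for the rule lhs -> rhs acting on a multiset state.

    Returns a function state -> [(weight, new_state)], where the weight
    counts the distinct ordered ways of annihilating the lhs (a falling
    factorial per label), in the spirit of creation/annihilation
    operators on a Fock space.
    """
    need = Counter(lhs)
    def op(state):
        ways = 1
        for lab, k in need.items():
            for i in range(k):
                ways *= state[lab] - i
        if ways <= 0:
            return []
        new = Counter(state)
        new.subtract(need)
        new.update(rhs)
        return [(ways, new)]
    return op

# Example: dimerization  A + A -> B  applied to a state with three A's.
dimerize = rule_operator(['A', 'A'], ['B'])
result = dimerize(Counter({'A': 3}))
# 3 * 2 = 6 ordered ways to pick the two A's; new state has one A, one B.
```

In such a semantics the weight becomes the propensity used in deriving stochastic simulation algorithms, and non-commuting rule operators encode the order-dependence of rewrites.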