Ok, please look at BCX, ThePhysicist.
Ok, my code was not perfectly optimised, but it was still readable.
ThePhysicist's code broke good practice: never pass a variable straight into printf as the format string. That is not good at all, as it allows a format-string exploit.
puts should have been used instead of printf in ThePhysicist's code to be safe: no exploit, and faster. I was not trying to optimise mine to the limit, just to something clean.
The processing I was referring to was the number of %s conversions the first function had to do. A function call had been removed as well. ThePhysicist avoided all format processing in printf. Yes, that is slightly faster than my example, but how readable is it? printf's internal processing is not that great anyway. Two lines of fairly simple code getting a 1 percent boost is one thing; the code BCX produces gets a lot more complex than that.
The first function is only about 1% slower. I said at times I got 10 times: some sections that BCX produces are downright nasty.
Code: Select all
PRINT CHR$(10),"This is the help on how to use X program",CHR$(10)
Notice that each CHR$(10) is a constant that never changes.
Output, from memory:
Code: Select all
printf("%s%s%s\n",chr$(10),"This is the help on how to use X program",chr$(10));
Now really, this needs optimisation just to be readable, let alone anything else.
Code: Select all
printf("\nThis is the help on how to use X program\n\n");
That is far more readable, faster, and neater.
It needs to be optimised into clean C, not a badly formed mess. BCX is not producing what I call clean C.
Slightly faster again would be using puts instead of printf, since no format-string work is performed.
The other problem is a run of PRINT statements each becoming:
Code: Select all
printf("%s\n","stuff");
printf("%s\n","stuff");
...
Merging these into a single puts or printf call where suitable can save a lot of time, and it is less messy.
I only gave a simple example; the real problems get very complex. As I said, it is at times up to 10 times faster in places.
A printf or string operation with a lot of calls to static CHR$ and other $-suffixed functions is slow. The same function, with those static calls folded away, can end up as a simple puts or printf call.
The only reason it would be harder to debug is that the BASIC code and the C code would no longer have a one-to-one relationship. I have never understood why BCX demands this.
Note that these faults also extend to string operations and the like. BCX has lots and lots of problems. It needs some optimisation to produce what I would call simple, human-readable code.
Also, i++ is faster than i=i+1 on some C compilers.
Think of it in asm: on some compilers i++ gets replaced with an INC, while i=i+1 gets replaced with an ADD instruction. INC does not need to fetch two values and is also smaller than an ADD at the asm level.
Only if your compiler optimises correctly are they the same speed: i=i+1 will be swapped for an INC if INC is faster on the processor chip, or i++ will be swapped for an ADD if that is faster on the processor chip.
I learnt that one the hard way coding for AVR chips. The first compiler I had was shoddy, with almost no optimization: a direct one-to-one conversion from C to asm in places.
Note that on some AVR chips i++;i++ is faster than i=i+2, let alone i=i+1;i=i+1;. Arithmetic is bad on some of them, yet INC works at a perfectly good speed.
I guess another teacher is around somewhere, not knowing the effect these optimisation corrections have on produced code, or the fact that not all compilers do them. My teacher told me the same thing: that they were identical, no difference. Knowing the truth of the matter would have saved me hours working out why some code I was looking at had no i++ and all i=i+1, while other code had all i++ and no i=i+1. It was because the programmer was doing the optimisation instead of the compiler.