Yep. If a code block is never reused in the program, it might be best to do the code "in line". However, I am guilty of writing functions for code that a program uses only once. In that case, inline code does do away with the overhead of the function call. So why did I use a function? First, I was not pressing against any resource (e.g., memory) or speed constraints. Second, and perhaps more important, I knew there would be other projects "down the road" that could use the function, so off it went to a small custom library.
As you said, so much of good programming involves tradeoffs...
On Thursday, November 28, 2019, 10:10:33 AM EST, Hans Summers <hans.summers@...> wrote:
Yes all agreed... though a novice programmer will find the first version easier to read and understand and change than the second "better" version ;-)
Use of memset() will probably slightly increase the code size, all things being equal, because of the function call: the memset() function gets compiled into the executable instead of just the code for the for loop. BUT it has another big benefit that you didn't mention: if any other part of your code can also be amended to use memset(), then you benefit by having two different places call one piece of code (memset) rather than having two separate for loops - which will save some code.
So that's another trick to make programs smaller... anything that can be factored out into its own function and called from multiple different places, will save space (as long as that benefit overcomes the penalty of the function call and the parameter passing - which is sometimes not the case for very simple functions).
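A sketch of that factoring (the buffer names and sizes here are hypothetical for illustration, not from any QRP Labs firmware): the cleared-buffer loop body lives in one helper, and each additional call site costs only a call instruction plus parameter passing:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical buffers that both need clearing at reset. */
static uint8_t rx_buf[64];
static uint8_t tx_buf[64];

/* The clearing code is emitted once... */
static void clear_buf(uint8_t *buf, size_t len)
{
    memset(buf, 0, len);
}

/* ...and each caller pays only the call and parameter overhead. */
void reset_buffers(void)
{
    clear_buf(rx_buf, sizeof rx_buf);
    clear_buf(tx_buf, sizeof tx_buf);
}
```

Whether this actually saves space depends on the call-plus-parameter overhead versus the size of the duplicated body - exactly the tradeoff described above, and for a body this trivial it may well not pay off.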
Well-written code will tend to be nicely arranged into functions collected together into appropriate modules. But if a function is only called once, from one caller... then there is the overhead of the function call and the parameter passing (if any). So sometimes good programming practice works against compact code. It's all tradeoffs...
73 Hans G0UPL
Alas, I've probably forgotten any assembler skills I used to have. I spend some of my time now trying to find ways to use standard C more efficiently. Indeed, in my Beginning C book, I spend a lot of time showing how to move from RDC (Really Dumb Code) to SDC (Sorta Dumb Code). For example, we've all seen something like:
for (i = 0; i < 100; i++)
myArray[i] = 0;
used to zero out an array. (If myArray is global, it should already be zeroed out. I never trust this assumption...I've been bitten in the butt by it several times early on.) The version above is not really RDC, but a version of SDC. A better way to write it is:
#define ELEMENTS(x) (sizeof(x) / sizeof((x)[0]))
#define MAXARRAYSIZE 100
memset(myArray, 0, ELEMENTS(myArray) * sizeof(myArray[0])); /* memset() takes a byte count */
So why is this better? A good optimizing compiler will produce almost identical code for both versions. However, the second version is easier to read/change and has no magic runtime numbers in it. Anything a programmer can do to make the code easier to read and understand is almost always a good thing. Also, the ELEMENTS() macro is typeless...it will work with any C aggregate data type so it can be used to remove magic numbers from other points in the code. My goal is to get to WC (Wow Code). I've never written any of that, but have seen a couple of examples written by others.
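A minimal sketch of that typelessness (the array names here are made up for illustration): ELEMENTS() yields the element count of any true array, whatever the element type, so the same macro removes magic numbers for ints, doubles, and pointer tables alike:

```c
/* Typeless element-count macro: works on any true array. */
#define ELEMENTS(x) (sizeof(x) / sizeof((x)[0]))

int         ints[100];
double      doubles[25];
const char *strings[] = { "foo", "bar", "baz" };

/* ELEMENTS(ints) == 100, ELEMENTS(doubles) == 25, ELEMENTS(strings) == 3.
   One caveat: it does NOT work on a pointer - including an array passed
   to a function, which decays to a pointer - because sizeof then gives
   the pointer's size, not the array's. */
```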
Anyway, this is an obtuse way of saying my assembler skills have eroded to almost zero and I live in an SDC world surrounded by C. I hope to do some WC before the sand runs out of the hour glass!
Now...get back to work!
On Thursday, November 28, 2019, 3:44:45 AM EST, Hans Summers <hans.summers@...> wrote:
> But I'm still appalled with the IDE as even a simple program like blink
> can suck down almost 1k of space.
Just clarification for other readers... you refer to the Arduino IDE. Arduino isn't used in any QRP Labs firmware. I always write in native C and in some cases (even in QCX) there is a bit of Assembler in there too.
I'm also not using any libraries like printf etc. or anything in the standard libraries. I just coded what I need myself, in C. It takes a bit longer than using libraries but at least you end up understanding everything that is going on, and being able to optimize it to the application.
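As an illustrative sketch of that approach (my own example, not actual QRP Labs code): the kind of hand-rolled replacement for printf("%u", ...) that avoids pulling the standard formatted-I/O machinery into a small firmware image:

```c
#include <stdint.h>

/* Convert n to decimal in buf (needs room for 10 digits + NUL);
   returns a pointer to the first digit. Digits are generated
   least-significant first, writing backwards from the end. */
static char *u32_to_dec(uint32_t n, char buf[11])
{
    char *p = buf + 10;
    *p = '\0';
    do {
        *--p = (char)('0' + (n % 10));
        n /= 10;
    } while (n != 0);
    return p;
}
```

Feeding the returned string byte-by-byte to a UART or LCD routine then replaces printf() entirely for this case, at a tiny fraction of the code size.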
I cut my teeth on Z80 assembler, primarily... mid-80's (I was a teenager), some surplus home computers (6502-based Acorn Electron bought cheap as surplus, then Sinclair ZX81 and ZX Spectrum (both Z80) handed down from my Grandfather when he upgraded to later models). I had a lot of fun then. I wrote my own assembler/disassembler in BASIC. I was somewhat obsessed with the Mandelbrot set in those days, and while of little practical value it was highly fascinating and a good motivation to learn Assembler and tricks, to make it go faster. I recall a simple low resolution Mandelbrot could take 10 hours to compute in BASIC, but in Assembler I could run it in 7 minutes :-) More: http://hanssummers.com/mandelbrot
I think the time I spent on 80's home computers was a good foundation for later work, not just these days in embedded systems (an ATmega328 has quite similar capabilities to those old machines) but also in the intervening years professionally. I often lamented a chronic tendency among younger software developers to assume a lot of "infinites": infinite CPU speed, infinite network bandwidth, infinite disk space, infinite memory, and infinitesimal network latency. A lot of the time these infinite assumptions actually did approximate reality, but I was working in financial exotic derivatives pricing and risk management, where much of the computation is very heavy indeed, so the approximations frequently broke down. I was known to frequently whip colleagues and team members (trying to whip kindly) into writing code more efficiently :-)
73 Hans G0UPL