The WinApi interface, in BBox 1.6, used to be:
Code:
LOGFONTW* = RECORD [untagged]
...
lfItalic*: SHORTCHAR;
...
and one sets this field with the code
Code:
VAR
font : WinApi.LOGFONTW;
style : SET;
...
IF Fonts.italic IN style THEN font.lfItalic := 1X ELSE font.lfItalic := 0X END;
In January we changed
lfItalic to BOOLEAN, and the application code needs to change to
Code:
font.lfItalic := Fonts.italic IN style;
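For reference, I presume the declaration in the new release now reads along these lines (my reconstruction of the changed field, not copied verbatim from the current WinApi):
Code:
LOGFONTW* = RECORD [untagged]
...
lfItalic*: BOOLEAN;
...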
Is this a good change?
1 - It certainly looks simpler and more direct.
2 - It breaks backward compatibility.
My concern is more serious than either of these points.
It looks simpler only if we restrict our attention to the BBox WinApi interface.
If we look at the Microsoft documentation, it tells us that
lfItalic is 1 byte, and that we should set that byte to 0 (false) for no italics or 1 (true) for italics.
I now look at the new BBox code and it is not obvious to me that:
1 - BOOLEAN is 1 byte
2 - FALSE is coded as 0
3 - TRUE is coded as 1.
With the old code I could see that I was doing what Microsoft asked; with the new code I can't.
Even if I believe the three facts above are true, I might want to check them to be on the safe side. That means they must be easy to find in the BBox documentation, but I can't find them there at all.
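Of course one can probe these facts empirically with a small test module, something like the sketch below (the module and procedure names are my own, and SYSTEM.VAL is exactly the kind of low-level trick whose need a documented guarantee should remove). Whatever it prints only tells us what the current compiler happens to do, not what is promised.
Code:
MODULE TestBoolCoding;
	IMPORT SYSTEM, StdLog;

	PROCEDURE Show*;
		VAR b: BOOLEAN; x: BYTE;
	BEGIN
		(* fact 1: how many bytes does a BOOLEAN occupy? *)
		StdLog.String("SIZE(BOOLEAN) = "); StdLog.Int(SIZE(BOOLEAN)); StdLog.Ln;
		(* fact 2: what bit pattern does FALSE use? *)
		b := FALSE; x := SYSTEM.VAL(BYTE, b);
		StdLog.String("FALSE coded as "); StdLog.Int(x); StdLog.Ln;
		(* fact 3: what bit pattern does TRUE use? *)
		b := TRUE; x := SYSTEM.VAL(BYTE, b);
		StdLog.String("TRUE coded as "); StdLog.Int(x); StdLog.Ln
	END Show;

END TestBoolCoding.
Running "TestBoolCoding.Show" and reading the Log would answer the three questions for one particular release, which is not the same as finding them stated in the documentation.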
Another issue arises when
lfItalic is set by Windows. The value 0 should be interpreted as FALSE, and any other value (e.g. 36) should be interpreted as TRUE. How do we know that BBox will react correctly when it receives a BOOLEAN whose coding differs from its usual codings for TRUE & FALSE?
Assuming TRUE is coded as 1, how does BlackBox interpret
"36", ~"36", "36" & "1", etc., where by "36" I mean a variable of type BOOLEAN with the same bit pattern as a BYTE of value 36?
Robert