#91
Gibson's Meltdown/Spectre Tester
On 24/01/2018 21:14, Daniel James wrote:
In article , Pjp wrote:
>> In fact the first 100,000 source lines of code I wrote for Windows
>> was in straight Pascal without using any libraries, instead making
>> all calls to the Win APIs directly. That was very satisfying.
>
> The same sort of satisfaction that you might derive from beating
> yourself over the head repeatedly with a brick?
>
> Libraries make life easier. It may be more fun if you write your own
> libraries, but eschewing them is just masochism.

Sometimes. Other times the documentation is ambiguous and incomplete, and you waste more time trying to work out how the weirdos who wrote the library intended it to be used than it would take to write your own.

The first time I tried to use an RTOS was like that. I wrote my own in the end, as we didn't need anything very complicated.

--
Brian Gregory (in England).
#92
Gibson's Meltdown/Spectre Tester
"Daniel James" wrote
| In fact the first 100,000 source lines of code I wrote for Windows
| was in straight Pascal without using any libraries, instead making
| all calls to the Win APIs directly. That was very satisfying.
|
| The same sort of satisfaction that you might derive from beating
| yourself over the head repeatedly with a brick?

More like the satisfaction of driving a standard shift.

| Libraries make life easier. It may be more fun if you write your own
| libraries, but eschewing them is just masochism.

The API is also libraries. It sounds like you mean sandboxed wrapper libraries, so that everything is part of an object model. Don't they all have their place?

Java and .Net are rare on the desktop for good reason: they're slow, bloated and superfluous. .Net never really did find its place, except backend server-side. It was designed for "web services", which never happened. Now it's being repurposed as a way to write Metro/RT/Universal apps. (Which are on at least their 3rd re-marketing, but with WinPhones dead they're completely irrelevant.)

I don't understand the attitude that the Win32 API is old and outdated. The new stuff is just wrappers. It's still the API underneath. That's what an OS is. Things like Java and .Net were meant to represent a safe layer on top of that. Not a better way. A specific tool for specific purposes.
#93
Gibson's Meltdown/Spectre Tester
In article , Brian Gregory wrote:
>> Libraries make life easier. It may be more fun if you write your own
>> libraries, but eschewing them is just masochism.
>
> Sometimes. Other times the documentation is ambiguous and incomplete
> and you waste more time trying to work out how the weirdos who wrote
> the library intended it to be used than it would take to write your own.

Libraries, generally, make life easier. This is true of any libraries you may write yourself as well as of third-party libraries. Of course, the decision to include a particular third-party library in your codebase is one that should only be taken after considering the quality of the code and of the documentation, the availability of source code, the availability of a test suite, the responsiveness of the library supplier's support organization ... and even the cost of the library. Some libraries are so bad as to be pure Bad News, and should be avoided. The same applies to co-workers!

> First time I tried to use an RTOS was like that. Wrote my own in the
> end as we didn't need anything very complicated.

Yes, the same applies to OSes, too.

--
Cheers, Daniel.
#94
Gibson's Meltdown/Spectre Tester
In article , Mayayana wrote:
>> Libraries make life easier. It may be more fun if you write your own
>> libraries, but eschewing them is just masochism.
>
> The API is also libraries. It sounds like you mean sandboxed wrapper
> libraries

No, I mean any cohesive collection of (home-grown or third-party) code that you can suck into your project to save you work (as long as it does save you work -- Brian Gregory is quite right to point out that not all libraries are competently put together).

> Java and .Net are rare on the desktop for good reason: They're slow,
> bloated and superfluous.

Java and .Net are not all that rare on the desktop -- nor are they necessarily slow or bloated. I'll agree on superfluous, though; nobody needs them.

However, Microsoft do make some code generation tools for .Net that make, e.g., GUI design for Windows very easy to do. Those tools didn't have to target .Net, but Microsoft chose to write them so that they did. Those tools are a powerful incentive to use .Net languages for desktop projects if you need a GUI done quickly and easily and don't have the skills to code one by hand.

> .Net never really did find its place, except backend serverside. It
> was designed for "web services", which never happened.

.Net was designed to replace Java when Sun sued Microsoft for adding Windows-only extensions to their Java implementation. The .Net runtime environment was a little more versatile than the JRE, but not much; otherwise .Net brought nothing new to the party. It's been developed further, since ...

> Now it's being repurposed as a way to write Metro/RT/Universal apps.
> (Which are on at least their 3rd re-marketing, but with WinPhones
> dead they're completely irrelevant.)

Java and .Net offer the promise of "Write Once Run Everywhere" -- the compiled application is distributed as a platform-independent binary that is interpreted or JIT compiled by a platform-specific runtime on the target platform. It's a nice idea, but not without problems.

I can see why it'd be attractive to Microsoft when they're trying to target 32-bit and 64-bit x86 architectures as well as various different ARM SoCs (and other things too) in different mobile platforms. You don't need to use .Net if you want to write "Universal" Windows Store apps, of course; C++ is still available.

> I don't understand the attitude that the Win32 API is old and
> outdated. The new stuff is just wrappers. It's still the API
> underneath.

Yes, of course. The wrappers make it easier to call the underlying APIs from (say) .Net languages ... because the .Net environment doesn't have a direct way to call native C interfaces. The wrappers were originally supposed to provide some sandboxing so that Win32 APIs can't be called in dangerous ways, but I suspect that much of that has been sacrificed in the name of performance, now.

> Things like Java and .Net were meant to represent a safe layer on top
> of that. Not a better way. A specific tool for specific purposes.

A safer way for inexperienced programmers.

--
Cheers, Daniel.
#95
Gibson's Meltdown/Spectre Tester
On 22/01/2018 00:05, Paul wrote:
> Next step is "as.exe", the GNU assembler, which takes %temp%\ccPlc7wC.s
> and makes %temp%\ccc6dCLM.o

(snipped most stuff)

So I learned something. Among the many steps that the GNU C compiler takes is to generate assembler source, then generate the binary from that.

That doesn't change the fact that the binary it produces in its last step is exactly the code that gets run. The compiler (cc1) gets to decide exactly what opcode is in that binary. There is NO overhead.

The odd thing, BTW, is that I run GNU compilers all day at work, and having spent many bored hours staring at top I've never seen "as" in the process list. Perhaps this is a Windows-specific thing?

Andy
#96
Gibson's Meltdown/Spectre Tester
On 23/01/2018 13:31, Wolf K wrote:
> I think it's because most computer languages do. Eg, the general
> pattern is:
>    Store ThisData ThisLocation
>    Fetch ThatData ThatLocation
> Etc.

Funny, that. I thought most of them said "A=B+C" - with the destination on the left. I think the first 5 assemblers I learned were that way around. The 68000 came as a nasty shock.

Then there's Forth... That just might be a+b=c... And LISP....

If you want weird I'll raise you APL. (Which does go right-to-left.)

Andy
#97
Gibson's Meltdown/Spectre Tester
Vir Campestris writes:
On 22/01/2018 00:05, Paul wrote:
>> Next step is "as.exe", the GNU assembler, which takes
>> %temp%\ccPlc7wC.s and makes %temp%\ccc6dCLM.o
>
> (snipped most stuff)
>
> So I learned something. Among the many steps that the GNU C compiler
> takes is to generate assembler source, then generate the binary from
> that. That doesn't change that the binary that it produces from its
> last step is exactly the code that gets run. The compiler (cc1) gets
> to decide exactly what opcode is in that binary. There is NO overhead.
>
> The odd thing, BTW, is that I run GNU compilers all day at work, and
> having spent many bored hours staring at top I've never seen "as" in
> the process list. Perhaps this is a Windows-specific thing?

It’s the same with GCC on Unix. Almost all the work happens in cc1 (or cc1plus). The assembler isn’t actually doing much and takes very little time. Clang, on the other hand, has an integrated assembler.

--
https://www.greenend.org.uk/rjk/
#98
Gibson's Meltdown/Spectre Tester
"Daniel James" wrote
| .Net never really did find its place, except backend
| serverside. It was designed for "web services", which
| never happened.
|
| .Net was designed to replace Java when Sun sued Microsoft for adding
| Windows-only extensions to their Java implementation.

It did serve as a Java competitor, but the initial release was surprisingly similar to the Metro/RT/Universal scam: "Web services are the future. We'd like you kids to get out of the system and write some trinkets to back up our Passport and Hailstorm fiascos." (Remember those?)

----------------------------------------
http://web.archive.org/web/201011121...eliverspr.mspx

"Visual Studio.NET for building, integrating and running next-generation, XML-based Web services. Visual Studio.NET, the latest version of the world's most widely used development tools, provides native support for drag and drop development of Web services. Together, these two products provide developers with a high productivity, multilanguage environment to rapidly build, deliver and integrate Web services on the Microsoft .NET Platform."
------------------------------------------------

| Java and .Net offer the promise of "Write Once Run Everywhere"

Yes. The promise. The current version of .Net won't even run on XP or earlier, much less "everywhere". That was never really their intention. And it's never really worked.
#99
Gibson's Meltdown/Spectre Tester
In message , Vir Campestris writes:
> On 22/01/2018 00:05, Paul wrote:
>> Next step is "as.exe", the GNU assembler, which takes
>> %temp%\ccPlc7wC.s and makes %temp%\ccc6dCLM.o
>
> (snipped most stuff)
>
> So I learned something. Among the many steps that the GNU C compiler
> takes is to generate assembler source, then generate the binary from
> that. That doesn't change that the binary that it produces from its
> last step is exactly the code that gets run. The compiler (cc1) gets
> to decide exactly what opcode is in that binary. There is NO overhead.

Well ... it depends how efficiently it produces source code. For some things, such as keeping track of pipelining/parallel code(s), it might actually do better than most hand-coders; for other things, it might not. I guess the ideal would be for it to generate the source code, and then experts optimise that - but I suspect the source code it produces isn't that easy to follow.

> The odd thing, BTW, is that I run GNU compilers all day at work, and
> having spent many bored hours staring at top I've never seen "as" in
> the process list. Perhaps this is a Windows-specific thing?
>
> Andy

(Is "as" an opcode?)
--
J. P. Gilliver. UMRA: 1960/1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf
Just as many people feel Christmas hasn't begun until they've heard the carols at King's, or that the election campaign hasn't begun until some politician lambasts the BBC ... - Eddie Mair, Radio Times 2013/11/16-22
#100
Gibson's Meltdown/Spectre Tester
On 26/01/2018 04:32, J. P. Gilliver (John) wrote:
> Well ... it depends how efficiently it produces source code. For some
> things, such as keeping track of pipelining/parallel code(s), it might
> actually do better than most hand-coders; for other things, it might
> not. I guess the ideal would be for it to generate the source code,
> and then experts optimise that - but I suspect the source code it
> produces isn't that easy to follow.

I struggle to follow optimised code with a debugger. I'd hate to try to optimise it further. Some of the things compilers do are _weird_.

Andy
#101
Gibson's Meltdown/Spectre Tester
Vir Campestris wrote:
> On 26/01/2018 04:32, J. P. Gilliver (John) wrote:
>> Well ... it depends how efficiently it produces source code. For
>> some things, such as keeping track of pipelining/parallel code(s),
>> it might actually do better than most hand-coders; for other things,
>> it might not. I guess the ideal would be for it to generate the
>> source code, and then experts optimise that - but I suspect the
>> source code it produces isn't that easy to follow.
>
> I struggle to follow optimised code with a debugger. I'd hate to try
> to optimise it further. Some of the things compilers do are _weird_.
>
> Andy

The compiler has a metric ton of command-line options that change its behavior. So when you see weirdness in object code, it's traceable to the weirdness in the individual writing the makefile :-)

*******

One of the levels of optimization (probably -O3) is based on "visible change". For example, if I put a single printf in a program like this

   printf("Hello world\n");

that causes a visible side-effect. The person running my code expects to see Hello world printed on the screen. The compiler writer cannot remove that statement, because of its visible side effect.

OK, let's try another. What does this construct do? Well, as the developer, my intent was to "make the room warm" :-) This is an example of a busy loop. In the old days, we used these for timing purposes. (Our floppy driver at work used to do stuff like this.) Maybe I never print out the "i" or "j" variable, and I just want the processor to waste a few microseconds before the next chunk of code.

   for (i = 0; i <= 100; i++) {
       j = j + 1;
   }

Since the loop has no printf(), no fwrite(), from the compiler writer's perspective, those lines can be removed.

Let's try another program. Here, my intent is to put the operating system under memory stress.

   int i = 0;
   void *m;
   while ( (m = malloc(1024*1024)) != NULL ) {
       i++;
   }

Now, from the compiler writer's perspective, I never use the malloc'ed memory for anything, so we can remove that object code entirely.

Now, let's add a single line to the program. The printf. Once I do this, I can be certain that with "-O3" optimization, my code won't get removed.

   int i = 0;
   void *m;
   while ( (m = malloc(1024*1024)) != NULL ) {
       i++;
   }
   printf("%d megabytes of memory allocated\n", i);

Now the entire stanza has a purpose, so it cannot be optimized away. The value of "i" depends on the outcome of the preceding statements. And to prevent "lazy evaluation" from defeating my mallocs at the OS level, I write to the RAM with memset, to tell the OS "hey, I really really meant malloc". You can see how my program is beginning to get cluttered with things that, twenty years ago, we didn't have to worry about. Twenty years ago, malloc evaluated right away, and nobody checked whether you used the resources entered in your program code.

   int i = 0;
   void *m;
   while ( (m = malloc(1024*1024)) != NULL ) {
       memset(m, 0, 1024*1024);
       i++;
   }
   printf("%d megabytes of memory allocated\n", i);

And there are other crazy ideas, such as bizarre space/time tradeoffs. For example, I could have a 30KB program with a FOR loop that counts from i=1 to i=2. If you use an "extreme enough" unroll option, the size of the program can increase to 60KB, with two instances of the code injected: one where "i" was replaced with "1", then a chunk of code where "i" was replaced with "2".

I actually ran into this a few years ago, while hex editing an EXE to remove a disk size check. The program insisted on still doing the disk size check. But after a while, I began to realize there were two (almost) identical chunks of code in the object, and a huge chunk had been unrolled, just to save the time spent incrementing a *single* counter variable.

No human doing hand optimization would ever do that. Duplicate two 30KB chunks of code, just to save incrementing a single counter by 1? Humans do these, though:

   for (i = 1; i <= 4; i++) {
       k[i] = sqrt(i);
   }

Now, I could unroll that to something like this, which produces the same kind of side effect:

   k[1] = 1;
   k[2] = sqrt(2);
   k[3] = sqrt(3);
   k[4] = 2;

And since the unrolling is "local" and I can see all the assembler lines doing that in a single window of my disassembler, perhaps that would be considered fair game for an unrolling directive. That's a space/time tradeoff, where I eliminate a counter variable and a conditional branch, for the improvement in "straight-line speed". But I'd better have a printf later (of course)... :-)

   printf("%d %d %d %d\n", k[1], k[2], k[3], k[4]);

Paul
#102
Gibson's Meltdown/Spectre Tester
In article , Paul wrote:
> OK, let's try another. What does this construct do? Well, as the
> developer, my intent was to "make the room warm" :-) This is an
> example of a busy loop. In the old days, we used these for timing
> purposes. (Our floppy driver at work used to do stuff like this.)
> Maybe I never print out the "i" or "j" variable, and I just want the
> processor to waste a few microseconds before the next chunk of code.
>
>    for (i = 0; i <= 100; i++) {
>        j = j + 1;
>    }
>
> Since the loop has no printf(), no fwrite(), from the compiler
> writer's perspective, those lines can be removed.

More strictly: those lines can be replaced by:

   i = 101;
   j = j + 101;

... though if the values of i and j are never used, the optimizer may remove those lines as well.

[Why 101 and not 100? Well, i is 100 in the last iteration of the loop, then is incremented once more before the test that exits the loop, so its final value will be 101. j is incremented by 101 because the loop executes 101 times, for values of i from 0 to 100 inclusive.]

--
Cheers, Daniel.