MadHatter
International Hazard
Posts: 1338
Registered: 9-7-2004
Location: Maine
Member Is Offline
Mood: Enjoying retirement
|
|
MicroShit's Newest Employees
I heard this on the radio yesterday while at work and found it online today:
Turn in Your Boss, Get Paid
Posted by Kachina Shaw, Dec 3, 2009, 2:45:02 PM
Microsoft is hoping to recruit more end users to join its ongoing battle
against software piracy. As part of this year’s awareness push,
yesterday was named Consumer Action Day
by the company, with a major element of the message focused on malware
fears and an invitation for some users to consider putting their own
bosses in hot water.
When most folks think of software piracy, they most likely think of
saving big bucks (and how the vendor can afford it, so it's no big deal,
anyway). Microsoft wants more people to think of the dangerous malware
that might be lurking in that counterfeit software, and the grief it'll
cause.
"Consumers who are duped by fraudulent software encounter viruses, lose
personal information, risk having their identities stolen, and waste
valuable time and money," says David Finn, Microsoft's associate general
counsel for Worldwide Anti-Piracy and Anti-Counterfeiting. And Microsoft
knows this because it's generally the people who have struggled through
one or more of these consequences after using pirated software who
squeeze some kind of satisfaction out of the whole sorry experience by
turning in the counterfeiters; the last two years have seen consumer reports double, Microsoft says.
If you are partial to turning in counterfeiters, and you live in London,
you could collect a reward from the Business Software Alliance (of which
Microsoft is a member) for ratting out your boss for using counterfeit
software in the workplace. Rewards can be as high as £20,000, or over
$33,000. V3.co.uk says BSA tried similar offers in Manchester and
Glasgow
and expects the whistleblowing to reduce annual losses of £149 million
(US$247 million) from piracy among London businesses alone.
A BSA spokeswoman surmised that employees who've been laid off or seen
their paychecks reduced in recent months just might be in the mood to
"nail" the boss.
The London reward offer is valid through Dec. 31.
I don't buy that "concern for malware" bullshit ! It's pure greed ! Some pictures:
First the BSA at feeding time.
Second, their eager new "staff".
From opening of NCIS New Orleans - It goes a BOOM ! BOOM ! BOOM ! MUHAHAHAHAHAHAHA !
|
|
hissingnoise
International Hazard
Posts: 3940
Registered: 26-12-2002
Member Is Offline
Mood: Pulverulescent!
|
|
If it makes the web safer for me I'm all for it. . .
|
|
bbartlog
International Hazard
Posts: 1139
Registered: 27-8-2009
Location: Unmoored in time
Member Is Offline
Mood: No Mood
|
|
How would it make the web safer for you? The link between piracy and malware seems tenuous. I could just as well argue that more money for Microsoft means more Microsoft marketing and market share, which means more security problems and viruses thanks to their crappy OS.
|
|
hissingnoise
International Hazard
Posts: 3940
Registered: 26-12-2002
Member Is Offline
Mood: Pulverulescent!
|
|
Let me rephrase it then---I'm all for anything that makes the web safer. . .OK!
|
|
hinz
Hazard to Others
Posts: 200
Registered: 29-10-2004
Member Is Offline
Mood: No Mood
|
|
This is the typical M$ bullshit.
It's in their own hands: they have the source code, so they can make their Vista and Win7 crap safer. But closed-source mainstream Windoze won't beat any open-source UNIX-based system in safety and stability.
Simply because a lot more people look through the code in open-source software and bugfix it. And because most dumb M$ users don't even set a root password, which is the default in every Linux I know. So the bad virus coder can patch system files and execute some mean Ring0 code on most Windoze PCs without even searching for security flaws like buffer overflows etc. The only thing he has to do is get the user to execute his file. And because of all these reasons, and because there are more Win PCs around, it's much more fun for them to code for Windoze.
And for most safety concerns there's the simple solution of not connecting a computer with an M$ OS and valuable material on its HD to the internet.
And most crackers do not add any malware to their programs. Simply because they crack them for fun, fame, and to make software available to people who don't have the money to buy it and who don't want to work with crippled software. I think people should buy software if they use it frequently or commercially, but why should someone be forced to use crippled software if there's a fully functional version available?
I only got a crack with malware in it once, and I've used a lot of reverse-engineered software.
[Edited on 5-12-2009 by hinz]
|
|
len1
National Hazard
Posts: 595
Registered: 1-3-2007
Member Is Offline
Mood: NZ 1 (goal) - Italy 1 (dive)
|
|
Yeah, I'm afraid Microsoft counts on the square customer who will think he is 'doing what is right', whereas in fact he's doing what they want. We've had a good demo here.
An actual real example I have dealt with is applicable to this very subject of practical chemistry. You used to be able to write simple C or even BASIC code to control apparatus. You can't anymore. Microsoft has made port read and write a privileged instruction.
If you're against tight control on port access, you're against a secure, robust OS, they say. Except you still need to write to ports, else the computer is a lame duck. But now, by interfacing to ports through their DDK, which you have to spend money to get and which slows everything down, they have turned the simple
inp a, 888
into several pages' worth of system and device driver code in the name of security. Think about this for a minute:
a lot of hard work for you + proprietary Microsoft code + money to Microsoft = inp a, 888 = security
Because we didn't have money for their DDK, I managed to work around it by reverse engineering their code and patching it with machine code. At the end of a week's work I could both read and write ports, in real time, as before. So how has this made the OS more robust?
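For reference, a minimal sketch of the kind of user-mode port access being discussed, the sort that worked under DOS/Win9x; the _inp/_outp CRT intrinsics are real MSVC calls, but the 0x378 base address is only the conventional LPT1 value and is an assumption for any particular machine. On NT-based Windows this traps with a privileged-instruction fault instead of touching the hardware, which is exactly the complaint here.

/* Sketch only: direct port I/O as it worked under DOS/Win9x.
   Assumes the MSVC _inp/_outp intrinsics and the conventional
   LPT1 base address 0x378 (may differ on a given machine). */
#include <conio.h>
#include <stdio.h>

#define LPT1_DATA   0x378   /* parallel port data register   */
#define LPT1_STATUS 0x379   /* parallel port status register */

int main(void)
{
    _outp(LPT1_DATA, 0xFF);           /* drive all data lines high */
    int status = _inp(LPT1_STATUS);   /* read the status lines     */
    printf("status = 0x%02X\n", status);
    return 0;  /* on Win2000/XP the port instructions above fault
                  in user mode before this point is reached */
}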
I think they have truly earned the title of this post.
|
|
Sedit
International Hazard
Posts: 1939
Registered: 23-11-2008
Member Is Offline
Mood: Manic Expressive
|
|
Quote: | You used to be able to write simple C or even basic code to control apparatus. You cant anymore. Microsoft has made port read and write a priveleged
instruction.
|
Come again? It's been a while since I hacked away at C++ and whatnot, but are you telling me that the inport and outport functions are no longer accessible to the programmer?
Knowledge is useless to useless people...
"I see a lot of patterns in our behavior as a nation that parallel a lot of other historical processes. The fall of Rome, the fall of Germany — the
fall of the ruling country, the people who think they can do whatever they want without anybody else's consent. I've seen this story
before."~Maynard James Keenan
|
|
12AX7
Post Harlot
Posts: 4803
Registered: 8-3-2005
Location: oscillating
Member Is Offline
Mood: informative
|
|
I am quite readily able to access ports in XP. You seem to have missed something...
http://logix4u.net/Legacy_Ports/Parallel_Port/Inpout32.dll_f...
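A rough sketch of how that DLL gets used, assuming the Inp32/Out32 stdcall exports it is commonly described as providing (the export names and the 0x378 address are assumptions, check the page above; the DLL's own kernel driver does the actual ring-0 I/O):

/* Sketch: driving the parallel port through the linked Inpout32.dll. */
#include <windows.h>
#include <stdio.h>

typedef short (__stdcall *INP32)(short port);
typedef void  (__stdcall *OUT32)(short port, short data);

int main(void)
{
    HMODULE dll = LoadLibrary("inpout32.dll");
    if (!dll) { printf("inpout32.dll not found\n"); return 1; }

    INP32 Inp32 = (INP32)GetProcAddress(dll, "Inp32");
    OUT32 Out32 = (OUT32)GetProcAddress(dll, "Out32");
    if (!Inp32 || !Out32) { printf("exports missing\n"); return 1; }

    Out32(0x378, 0x55);                        /* write a test pattern */
    printf("read back: 0x%02X\n", Inp32(0x378));

    FreeLibrary(dll);
    return 0;
}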
Not really practical to use a PC to control ports anymore though. Certainly not from a multitasking system. Better to use a USB interfaced
microcontroller -- you still get to write all the code you wanted, and it runs with predictable timing. Try to play a tone through your parallel port
and all you'll get is garble.
As for linux... I laugh. Guru users, sitting high in their ivory rollaround chairs, sneer at average joes toiling in the fields. I'm amazed how a
supposed "guru" is so blinded as to make such a glaring omission. Even the most simplified distributions are essentially impossible for a
non-technical user to install and operate, so anyone in favor of widespread linux adoption clearly has his head up his ass.
So anyway, what were we talking about? Piracy? They say reports have doubled in the last two years... I wonder how often illicit software trading itself doubles: every year? Every six months? Just because reports of your battle are growing doesn't mean you're losing more and more ground each day.
Tim
|
|
len1
National Hazard
Posts: 595
Registered: 1-3-2007
Member Is Offline
Mood: NZ 1 (goal) - Italy 1 (dive)
|
|
Yes, that's right, on any Windows above 98. You can still put statements such as
_outp(a,b);
in your code, and it will even compile. But when you try running it, the processor will trap the instruction and generate a General Protection Fault. The instruction is only allowed in code running in ring 0, which in Windows means you have to write a device driver and register it in the Windows registry so it can run with all the rest of the protected Windows code. Drivers use a proprietary Microsoft format and can only be compiled with the appropriate DDK.
But what are they trying to do, and what is this so-called security? Protection against developers writing crappy code that's going to access ports it shouldn't and crash the machine? In that case, by this solution they must mean that any developer who pays them money and works with their DDK is too good to crash the machine.
@ax Why don't you read posts thoroughly before replying? What you have linked to is third-party software that registers a device driver to which you interface to get port access. That's the reason you are able to do it: because you're using work someone else has done. The penalty you pay is that access is slow, which is why you reckon it's not really practical. I am able to access ports using my own code at the maximum rate of which the computer is capable. It also means if I played tones they wouldn't be garbled. And you save the cost or design time of a microcontroller and USB driver if you don't need one.
[Edited on 5-12-2009 by len1]
|
|
Sedit
International Hazard
Posts: 1939
Registered: 23-11-2008
Member Is Offline
Mood: Manic Expressive
|
|
That floors me... Len.
I don't know what to think. I assume this applies to 32-bit protected mode as well (like I said, it's been a while for me), but either way that is crazy, since there aren't many ways to "crash" a computer using the ports. It makes no sense that they will block the outp functions yet still allow ASM to be embedded directly into the source code. (Do they allow that still?)
Shit, I was just about to get back into some simple programming using VB, but so much has changed even in that piece of crap that I'm not sure I can do it anymore. All I wanted to do was make some simple solubility charts to save my data into and have a quick reference, but it's appearing to be much more of an issue than it should be.
Knowledge is useless to useless people...
"I see a lot of patterns in our behavior as a nation that parallel a lot of other historical processes. The fall of Rome, the fall of Germany — the
fall of the ruling country, the people who think they can do whatever they want without anybody else's consent. I've seen this story
before."~Maynard James Keenan
|
|
not_important
International Hazard
Posts: 3873
Registered: 21-7-2006
Member Is Offline
Mood: No Mood
|
|
It's not Microsoft as such that blocks access to the I/O ports, it's the CPU itself. And that's not new; multitasking OSes since at least the 1970s have run the kernel in privileged mode and user processes in user mode. Depending on the processor family this means blocking various instructions such as inp and outp, and potentially restricting access to memory space through memory management. I know this was true back then because I wrote device drivers for DEC PDP-11s, and studied IBM 360/370 architecture and multitasking OSes.
This was done to prevent one process from taking down the entire computer, along with all the other tasks running on it. Developers at IBM used it to test out new releases of OSes, running them on a virtual machine (in user mode) with the actual OS catching the traps from executing privileged instructions and emulating them, so the OS undergoing testing 'thought' it was setting bits in restricted registers.
Memory mapping was used to restrict access to physical memory for the same reason. A wild pointer could attempt to access any memory location; the MMU would prevent actual physical locations outside of the space assigned to your process from being accessed. Your process might be running in a full virtual memory space, meaning you could access all of memory as you saw it, but that was virtual memory and part of it might reside on disk. On DEC PDP-11s the 16-bit architecture meant you only had 64K of memory space, or 64K of code and 64K of data space, while the MMU mapped that into a 22-bit address range. Your program ran as a process with memory space mapped into physical core addresses in 4K chunks. You could access the address space of memory used for I/O, the top 4K, but you couldn't actually access the real I/O page because of the potential to trash other processes' I/O. The kernel and device drivers were the code doing that; Microsoft didn't even exist yet when those OSes were written.
And that's considered good multitasking OS design. You don't want user processes munging with I/O operations or being able to poke into other processes' memory. Intel was there before Microsoft, with protected mode on the 80286 introducing privileged instructions and memory protection, and virtual 8086 mode on the 386 mapping the old 16-bit environment into the full protected address space. Later processors expanded that and plugged some CPU design holes. Microsoft finally got around to properly using the hardware to restrict program access to computer functionality a bit, but did a poor job in that there are still indirect ways to modify OS files.
What you are complaining about is like buying a Morris J4 and expecting it to act like a Porsche 911SC. Windows is a low-performance, overweight OS attempting to be useful to everyone, novice through expert, ending up being 'adequate' for the majority of applications, filled with obscure configuration options, and carrying a load of legacy support.
As for *NIX Quote: | Even the most simplified distributions are essentially impossible for a non-technical user to install and operate, | my sister, who had a difficult time comprehending fractions, managed to install Ubuntu onto a used PC in less time than Windows took, and without getting frustrated. This is a near-eschatological event; in the past she has become frustrated programming answering machines and cash registers.
As for the root of this thread, there's a simple solution: buy it, or don't use it. For most applications there are low-cost to free alternatives; in some cases the FOSS version is better than the commercial ones - see the U.S. Veterans Administration's VistA package, considered considerably better than the commercial products by the majority of end users.
|
|
len1
National Hazard
Posts: 595
Registered: 1-3-2007
Member Is Offline
Mood: NZ 1 (goal) - Italy 1 (dive)
|
|
Quote: Originally posted by not_important | It's not Microsoft as such that blocks access to the I/O ports, it's the CPU itself. And that's not new; multitasking OSes since at least the 1970s have run the kernel in privileged mode and user processes in user mode. Depending on the processor family this means blocking various instructions such as inp and outp, and potentially restricting access to memory space through memory management. I know this was true back then because I wrote device drivers for DEC PDP-11s, and studied IBM 360/370 architecture and multitasking OSes.
This was done to prevent one process from taking down the entire computer, along with all the other tasks running on it. Developers at IBM used it to test out new releases of OSes, running them on a virtual machine (in user mode) with the actual OS catching the traps from executing privileged instructions and emulating them, so the OS undergoing testing 'thought' it was setting bits in restricted registers.
|
No, that's misleading. Any Intel-compatible processor will allow you to execute port access provided you're running Win98 or Win95. So clearly restriction of port access is not inherent in the processor per se.
More correctly, it's a privileged instruction in the processor, and in Win98 and below Windows allowed user code that privilege. In Windows 2000 and up Microsoft CHOSE not to permit these instructions in the task segment.
You say it's done to prevent one process crashing the entire machine. So Msoft hasn't remedied that. I am tired of pointing out the obvious fact that having paid Msoft for the DDK you're free to access ports, albeit in a stupendously complicated fashion, and crash the machine if that's your aim or if you're a crap programmer.
[Edited on 5-12-2009 by len1]
|
|
not_important
International Hazard
Posts: 3873
Registered: 21-7-2006
Member Is Offline
Mood: No Mood
|
|
Quote: | Any Intel-compatible processor will allow you to execute port access provided you're running Win98 or Win95. |
This is because versions of Windows pre NT/XP run everything in supervisor mode, or ring 0 (the ring concept goes back to the Bell Labs/GE Multics OS of the mid 1960s; rings turned out to be not too useful and were collapsed into supervisor/user modes in most processor architectures of 1960s to early 1970s vintage). Yes, it's an attribute of the processor, but multitasking OSes had been using S/U modes for many years before Windows 3 came along; Microsoft decided to ignore that for several reasons (I used to have lunch with a couple of people working on Windows kernel stuff, and we had a lot of arguments).
Yes, Windows allows you to bless your app so it can do useful hardware access, and that allows you to do all sorts of bad stuff. But Windows allows you
to do many things most multitasking OSes do not, or only allow supervisor mode code to do, because those OSes are concerned with robustness and
security. Real time OSes generally allow direct hardware access, or make it simple and straightforward to do so. They run everything in supervisor
mode, or make it simple to tag tasks as running in supervisor mode while non-control tasks run in user mode. But real time OSes are generally used in
embedded systems, where random applications are not being loaded into the system.
So, yes, Microsoft decided to build a car with 5 wheels, only 2 forward gears, and that could only turn right. But that doesn't mean that is a proper
design for a vehicle, no matter how much PR you throw at it.
Microsoft was a latecomer to computer OSes, and had problems following common experience-gained wisdom in the field. This goes back to its roots in
CP/M-derived QDOS/86DOS; later versions of DOS continued to support the QDOS-CP/M system calls, rather than factoring them out into a separate app
that would be run to support the crufty old syscalls. Doing so would mean instead of "C:>stupidoldapp" you'd need to type "C:>cpm stupidoldapp".
This likely would have encouraged software companies to clean up their old CP/M programs that had hastily been ported to 8086/DOS systems, and were a
source of frustration to many a user. (in 1979 I worked a couple of doors away from Seattle Computer Products, where Tim Paterson was hacking away on
QDOS)
|
|
quicksilver
International Hazard
Posts: 1820
Registered: 7-9-2005
Location: Inches from the keyboard....
Member Is Offline
Mood: ~-=SWINGS=-~
|
|
A LONG time back there was the CSA (Computer Software Alliance), which was also a group of lawyers that sued people for piracy....but the BSA is totally a creature of Microsoft. The CSA very rarely went after individuals and always gave a company a few warnings before taking action; it was an association, and even university students could sign up and stop people from claiming their little shareware program was written by the professor instead of the little guy......but the BSA is a vicious bunch of paid MS lawyers that hit deep-pocketed companies and struggling small businesses to make $ and keep Windoz every cent!
When the dial-up BBS file-sharing thing went on back in 1989-93, everyone knew that the MORE a product was OUT THERE, the more exposure it got and it SOLD! Now MS wants every penny from Windows; it's getting TOO greedy! Most people didn't know that Bill Gates BOUGHT DOS, he never wrote it! He yanked the NT kernel from IBM's OS/2 and MS joint venture and kept that too. If a product was competing he just bought it and put it on a shelf so it wouldn't hurt his agenda. That's what happened with something called Visual Pascal, Fox-Pro, & a collection of others.
SKUNK.....
|
|
not_important
International Hazard
Posts: 3873
Registered: 21-7-2006
Member Is Offline
Mood: No Mood
|
|
Quote: | He yanked the NT kernel from IBM's OS/2 and MS joint venture |
No. While the Microsoft-IBM deal fell apart because Microsoft wanted to change the API from an enhanced OS/2 one to an extended Windows one, NT had very little to do with OS/2. Instead Microsoft hired Dave Cutler and others from DEC, where they had been working on RSX-11 and then VMS; Cutler was the designer and lead developer of both, and had been in on the work that led to the VAX.
Cutler had been the lead on a project at DEC for a new processor design, PRISM, and its OS, Mica. Those were canceled and Cutler decided to leave DEC, but Microsoft made him an offer before he left.
Those who had worked with VMS, and you could learn a lot about the OS because DEC shipped the source code with their machines, noticed similarities
between VMS and WinNT. In fact, DEC itself noticed this, engineers told upper management about it, DEC prepared to sue Microsoft but first went to
them with a package of evidence. Microsoft rolled over and gave DEC a number of things including cash, and got off cheap - a court case with DEC
pushing for damages could have resulted in a much larger payout as there are a lot of similarities between VMS and NT.
Again, I knew people working at DEC West at that time, but there's plenty of stuff written on it:
http://web.archive.org/web/20020503172231/http://www.win2000...
http://everything2.com/title/The+similarities+between+VMS+an...
http://www3.sympatico.ca/n.rieck/docs/Windows-NT_is_VMS_re-i...
But it's true that Microsoft bought companies and shelved them when those companies had products competing with a current or planned Microsoft product. I know they bought a company with a low-end desktop publishing product and shut it down, sending the boxed packages to the dumpster - I know people who grabbed copies before those were trashed; MS never did go on to complete their own DTP product.
|
|
12AX7
Post Harlot
Posts: 4803
Registered: 8-3-2005
Location: oscillating
Member Is Offline
Mood: informative
|
|
Quote: Originally posted by len1 |
@ax Why don't you read posts thoroughly before replying? What you have linked to is third-party software that registers a device driver to which you interface to get port access. That's the reason you are able to do it: because you're using work someone else has done. |
Yes, so? Is it supposed to bother me that I didn't "do the work"? It must bother you deeply that I succeeded so easily, while you had to go through so much drudgery to reach the same state.
Using someone else's code is commonplace in programming. For example, you didn't write the operating system you're using.
Quote: | The penalty you pay is that access is slow - which is why you reckon it's not really practical. |
No, those are separate issues.
As far as I know, access is limited by the port speed. With the processor completing subroutine calls in the nanosecond range, the parallel port's
actual clock rate is by far the limiting factor. I don't even know where timing is generated; is it synchronized to the 33MHz PCI bus, or is it
limited to the 8MHz ISA bus the port comes from? The data rate can't be too high or cable reflections would totally disrupt communications with any
device.
Meanwhile, does the processor whack through a bunch of instructions, putting output data into a cache buffer, which filters through the north and
south I/O bridges, eventually ending up at its destination? Does the processor stall for hundreds of cycles waiting for the I/O device to accept more
data? Is it able to switch tasks in that time and process other data? If it reads asynchronously, does it have to wait for the next cycle of 8MHz or
whatever to read it?
Whatever the case, the whole process must be astonishingly inefficient. A processor that can chug through gigabits of bandwidth, tasked with twiddling a barely-megabit port, just isn't practical.
The other issue is timing.
Quote: | I am able to access ports using my own code at the maximum rate of which the computer is capable. It also means if I played tones they wouldn't be garbled. And you save the cost or design time of a microcontroller and USB driver if you don't need one. |
Can you really? Have you measured it? Have you tried, for instance, writing a software UART? Tones aren't actually such a great example, because the ear is a poor judge of spectral purity. (I once built a tone generator with a Z80, using a 16-bit programmable one-shot counter to generate timing. When it reaches zero, it stays at zero and fires an interrupt. Meanwhile, the processor is executing a loop, so it will take 5-12 cycles to respond to that interrupt. At a 4MHz clock and a 2kHz interrupt rate, the jitter is negligible, but it's certainly easy to measure with another counter, and it would make high-speed serial communications impossible.) If you have guaranteed timing, it should be particularly simple to implement these things.
I'm betting you actually haven't done any of these, and if you try, you'll discover it's impossible to get reliable results from Windows. The reason is that your program is constantly being interrupted by task switching, on the order of milliseconds. The program just stops, and you miss a big fat cycle of your waveform. The tone gets a hiccup, the serial data goes corrupt. These are things which a microcontroller is able to do.
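One crude way to see this for yourself (a sketch of my own, not anything from this thread: QueryPerformanceCounter/Frequency are real Win32 calls, but the iteration count is arbitrary) is to spin on the high-resolution counter and log the largest gap between consecutive reads; on a loaded desktop the worst-case gap lands in the millisecond range even though the average is well under a microsecond.

/* Sketch: measure the worst-case gap between successive reads of the
   high-resolution counter; long gaps are the scheduler taking the CPU away. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, prev, now;
    double worst_us = 0.0;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&prev);

    for (long i = 0; i < 50000000; i++) {          /* spin for a while */
        QueryPerformanceCounter(&now);
        double gap_us = (double)(now.QuadPart - prev.QuadPart) * 1e6 / (double)freq.QuadPart;
        if (gap_us > worst_us) worst_us = gap_us;
        prev = now;
    }
    printf("worst gap between reads: %.1f microseconds\n", worst_us);
    return 0;
}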
High-speed, buffered communications protocols can go between the cumbersome computer and the streamlined microcontroller, so that communication is still possible, while being easy to connect (USB is plug-and-play) and available through existing APIs (you don't want to bit-bang a USB port, and you don't need to; you hook the I/O port from the OS).
Tim
|
|
len1
National Hazard
Posts: 595
Registered: 1-3-2007
Member Is Offline
Mood: NZ 1 (goal) - Italy 1 (dive)
|
|
Not only have I measured it, but I control equipment with it to microsecond precision. I'm a scientist and I do this professionally.
But hey, you've got the grace of a brick sh.t house. Fairly slow to understand things, while showering your posts with the supposed deficiencies of others which are actually gaps in your knowledge. I think we've seen this before with your understanding of transformers. I don't actually help people like that.
|
|
hinz
Hazard to Others
Posts: 200
Registered: 29-10-2004
Member Is Offline
Mood: No Mood
|
|
It's simply not possible to get accurate I/O timings in any normal OS. For this, real-time operating systems were developed.
http://en.wikipedia.org/wiki/Real-time_operating_system
And 12AX7 is right: in the common operating systems, accurate I/O timings are not possible, because the thread scheduler quickly switches between tasks.
And if a hypothetical kernel routine (with interrupts disabled by setting the IF flag in the flags register to 0) is executing at the time the CPU should execute your I/O routine, the CPU completes this kernel routine, and possibly more code, until the IF flag is 1 again. Only then will it handle the interrupt of your application by jumping to the interrupt vector, probably int 70h, the system timer interrupt, and from there to the code pointed to by the interrupt vector. Depending on how long the IF flag was 0, the I/O work is delayed.
[Edited on 6-12-2009 by hinz]
|
|
len1
National Hazard
Posts: 595
Registered: 1-3-2007
Member Is Offline
Mood: NZ 1 (goal) - Italy 1 (dive)
|
|
OK. You and ax7 are right, and the set of gear I've had operating in my lab for years, running in real time under Windows, is just not there.
Learn to ask questions if you want to understand something.
Of course it could be that you don't want to learn anything and are just interested in joining the chest-beating band. In that case you will come out of it none the wiser and your reward will be just the chest beating.
[Edited on 6-12-2009 by len1]
|
|
not_important
International Hazard
Posts: 3873
Registered: 21-7-2006
Member Is Offline
Mood: No Mood
|
|
Both sides are somewhat correct in this.
You can get reasonable real time performance with an ordinary multitasking OS, provided that the CPU is not being used much, or at least not used much
by anything but the time critical code. This would be even more true with OSes such as Win9X, where it was easy to talk to the hardware and even take
it over, and there was no sharp boundary between the OS and application code.
As Len didn't mention which version of Windows he uses, what his code does, and what else runs at the same time, it's difficult to comment further on
that case.
What you can't get with a general-purpose OS is deterministic and dependable operation. With a real-time OS you can analytically determine whether the set of tasks running will always (barring hardware failures) be able to run their required functions within the required amount of time.
On a non-realtime OS, performance generally decreases in a non-linear fashion: as the system load is increased, there is a point where the average wait for a task to run begins to increase much more quickly than the applied load. And worse, it is not practical to determine just how much delay is being introduced, or whether a task will be run within a required interval.
Certainly Windows could not do this; Microsoft told us so when they were attempting to convince us to use it. When pressed on how we would be able to ensure a task would run within the required time, they said "well, you can't for a given configuration; if there are problems you just use a faster and faster CPU until it works." We did not go with Windows, but with a real-time OS; as a result we were able to do more with an 8 MHz 80186 than a competitor could using a 25 MHz 80386.
I know of a case that illustrated the collapse of a non-RT OS rather nicely. There was a product that in part interfaced to telephone lines, a number of lines, as a PBX might. The first models maxed out at 8 lines, and the manufacturer wanted to expand this number. Someone measured the delays in servicing the telephone interface cards on a bench lash-up of an original and an expanded system, and on an original non-maxed-out system in use in the field. They then drew graphs of the timings and projected them out to 50 lines, assuming from the data points a simple linear relationship, and determined that calls could be serviced in less than 1.5 seconds - for humans a reasonably quick response time.
They built the expansion boxes and sold them. At first some customers were happy; things worked well. Then several customers had their expanded systems go to hell; they were getting complaints of calls being answered quite slowly or not at all. Some time spent by the customer support engineers showed that indeed you could call one of those systems and sometimes get prompt processing, while other calls would give extended ringback for many seconds - those calls weren't being answered quickly.
They finally cobbled together a testbed the same size as one of those customers' systems. No problem handling calls placed there, but they could only use a few lines at once. So someone made a version of the code running on the telephone interface cards that would fake incoming phone calls; it would wait for a programmable interval, pretend it had seen its telco interface go active, present a set of fed digits, and supply a dummy DTMF or voice message.
When that was run with an increasing number of active lines, the system quickly fell apart. While the majority of calls would be answered within 2 to 4 seconds, a noticeable number took much longer, with some taking as long as a minute to be answered. The graph of delay times vs. the number of calls getting that delay value gave a classic long-tail distribution, which would collapse into a nice skinny Gaussian peak matching their first system's delays as the number of active lines was decreased.
It was their non-real-time, non-deterministic OS. After they instituted a crash program to overhaul the system and get a proper OS in it, the testbed delays were very close to those projected by the graphs done before the expansion project. If you really pushed the load on the system the delay time would increase; the delay graph curve would broaden a bit and slide towards increasing delays, but not nearly as much as with the old OS, and with none of the outliers with huge wait times - every call would have about the same delay time.
As for allowing access to I/O space defeating the insulation of processes from each other's ill behaviour, this is only a problem when that access is not well controlled. On some OSes I've used, a process could access whatever hardware it wanted; everything was effectively in supervisor mode. But these were embedded systems where the only code running was what the designers had specified and written, a tightly controlled environment where any mistakes were all your own.
On other OSes I've used, access to I/O space was a bit like using a file system. With a file system you open a file and get a handle for it, you read/write to that handle, and you close the file when done. On these systems, to access I/O space you requested the use of an address or addresses; if not in use, the OS recorded them as now belonging to that process; if in use by another process, the request would fail. When actually doing I/O you did a syscall, and the requested address entry would be checked to see if the process ID matched that of the requestor, failing if not. You could wiggle bits all you wanted on the hardware at the I/O space addresses you had ownership of, but not at any addresses belonging to other processes. The syscalls were actually macros that in one mode would compile to ordinary system requests, while in another mode, intended for flat-mode operation with no U/S distinction, they mostly turned into inline code; but they still did the I/O space registration in a global table matching that used by the supervisor in the other mode, giving a fair amount of protection against programming errors (the marking of the table, the request for ownership of an address, was still a syscall, and the table was read-only to user processes).
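A toy sketch of that ownership scheme, with every name invented for illustration and no particular OS in mind: a process claims an address, the table records the owner, and every access is checked against it.

/* Sketch of an I/O-address ownership table: claim an address, then every
   access is checked against the recorded owner. Illustrative only. */
#include <stdio.h>

#define MAX_IO_ADDRS 64

struct io_entry { unsigned short addr; int owner_pid; };
static struct io_entry io_table[MAX_IO_ADDRS];   /* read-only to user code */
static int io_count = 0;

int io_request(unsigned short addr, int pid)     /* "syscall": claim an address */
{
    for (int i = 0; i < io_count; i++)
        if (io_table[i].addr == addr) return -1; /* already owned: fail */
    if (io_count == MAX_IO_ADDRS) return -1;
    io_table[io_count].addr = addr;
    io_table[io_count].owner_pid = pid;
    io_count++;
    return 0;
}

int io_write(unsigned short addr, unsigned char val, int pid)
{
    for (int i = 0; i < io_count; i++)
        if (io_table[i].addr == addr) {
            if (io_table[i].owner_pid != pid) return -1;   /* not the owner */
            printf("out %#x <- %#x\n", addr, val);         /* would hit hardware */
            return 0;
        }
    return -1;                                             /* never claimed */
}

int main(void)
{
    io_request(0x378, 100);                        /* process 100 claims the port */
    io_write(0x378, 0xAA, 100);                    /* owner: allowed              */
    printf("%d\n", io_write(0x378, 0x55, 200));    /* other process: refused      */
    return 0;
}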
|
|
Polverone
Now celebrating 21 years of madness
Posts: 3186
Registered: 19-5-2002
Location: The Sunny Pacific Northwest
Member Is Offline
Mood: Waiting for spring
|
|
I haven't tried to do any realtime programming in more than 10 years, but it is not astounding on the face of it that microsecond precision could be
achieved with a modern general purpose operating system on modern hardware. One microsecond is thousands of clock ticks on a fast modern processor,
and several OSes including Windows permit setting task affinity to bind processes to CPU cores on multicore or multiprocessor systems, which should
further diminish the chances of system background work interfering at a critical time. Using an external microcontroller interfaced over USB seems
like an easier solution only if you're already quite familiar with that hammer.
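For what it's worth, a sketch of the knobs mentioned above; these are real Win32 calls, but the affinity mask and priority values are just one plausible choice, not a recipe from this thread.

/* Sketch: pin the current thread to core 0 and raise its priority so the
   scheduler is less likely to preempt a time-critical loop. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SetThreadAffinityMask(GetCurrentThread(), 1);                    /* core 0 only */
    SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

    /* ... time-critical polling or port I/O loop would go here ... */
    printf("running pinned and at high priority\n");
    return 0;
}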
PGP Key and corresponding e-mail address
|
|
quicksilver
International Hazard
Posts: 1820
Registered: 7-9-2005
Location: Inches from the keyboard....
Member Is Offline
Mood: ~-=SWINGS=-~
|
|
Quote: Originally posted by Polverone | I haven't tried to do any realtime programming in more than 10 years, but it is not astounding on the face of it that microsecond precision could be
achieved with a modern general purpose operating system on modern hardware....... . |
I also have not done anything in a GREAT period of years but when I last looked, what was popular were "Visual Languages".
I am surprised that anything of serious concern could be accomplished with material that insulates the user so far from the machine. .....Cute for more bells & whistles on a bloated word processor, but those are not real code languages anymore.
NOTE: Before anyone gets offended because they did something nice in a Visual language - I don't mean to say they suck at everything imaginable. It's just that when you came up looking at a text-based screen that accomplished what you needed - with no waiting for some "re-painting" of the screen....you just get apprehensive about a serious project using the ALL NEW "Microsoft Visual Grape Thunderbird" (with all the horrid assembler subroutines neatly tucked far away from the user, so you won't have to type).
[Edited on 7-12-2009 by quicksilver]
|
|
pantone159
National Hazard
Posts: 589
Registered: 27-6-2006
Location: Austin, TX, USA
Member Is Offline
Mood: desperate for shade
|
|
Quote: Originally posted by quicksilver | I also have not done anything in a GREAT period of years but when I last looked, what was popular were "Visual Languages". |
"Visual" is usually just a name. E.g., I am currently writing software using MS "Visual Studio" and I assure you that every last bit of my code is
normal text. The development environment does have a GUI, with the debugger and such, but that is just the development tools, not the program code itself that you write.
OTOH, I also (unfortunately) have to deal with another programming system that IS "visual", in that all commands are selected by picking from dialogs
and such, and it is ABSOLUTELY AWFUL.
|
|
Sedit
International Hazard
Posts: 1939
Registered: 23-11-2008
Member Is Offline
Mood: Manic Expressive
|
|
With all the talk of real-time operations, it seems to be based around the Windows OS. Most programming I used to play with a long time ago was C++ MS-DOS based programs. Would these have the same issues as programs running under Windows' rule, or were these still technically under the rule of Windows when I was writing them?
Knowledge is useless to useless people...
"I see a lot of patterns in our behavior as a nation that parallel a lot of other historical processes. The fall of Rome, the fall of Germany — the
fall of the ruling country, the people who think they can do whatever they want without anybody else's consent. I've seen this story
before."~Maynard James Keenan
|
|
len1
National Hazard
Posts: 595
Registered: 1-3-2007
Member Is Offline
Mood: NZ 1 (goal) - Italy 1 (dive)
|
|
Quote: Originally posted by not_important | Both sides are somewhat correct in this.
You can get reasonable real time performance with an ordinary multitasking OS, provided that the CPU is not being used much, or at least not used much
by anything but the time critical code. This would be even more true with OSes such as Win9X, where it was easy to talk to the hardware and even take
it over, and there was no sharp boundary between the OS and application code.
As Len didn't mention which version of Windows he uses, what his code does, and what else runs at the same time, it's difficult to comment further on
that case.
|
Quote: |
I haven't tried to do any realtime programming in more than 10 years, but it is not astounding on the face of it that microsecond precision could be
achieved with a modern general purpose operating system on modern hardware. One microsecond is thousands of clock ticks on a fast modern processor,
and several OSes including Windows permit setting task affinity to bind processes to CPU cores on multicore or multiprocessor systems, which should
further diminish the chances of system background work interfering at a critical time. Using an external microcontroller interfaced over USB seems
like an easier solution only if you're already quite familiar with that hammer.
|
Ah well, I didn't provide any details because my standard response to chest-beating Neanderthals, whose main contribution is to stifle the forum, is not to reward their behaviour. But civil dialogue is a totally different matter.
I have used most Windows systems (Windows 98, NT, XP) to run equipment such as mass spectrometers, FTIR instruments and digital recorders whose communication protocols require precise microsecond timing. I do this without any additional hardware in the form of plug-in cards or microprocessor-interfaced USB, by using the parallel port of the computer. I do this because the latter is adequate and things run for years with no glitches. I am afraid I cannot see how designing an additional micro card is simpler; on the contrary, it is a whole pile of extra and unnecessary work which, since my main task is to do science, I do not need.
I get Windows to do this in several ways depending on the situation. Task switching is done using interrupts. In all cases I disable interrupts in device drivers during critical code portions, which ensures deterministic program flow provided an NMI does not occur. The NMI does not occur during normal operation in any system I have encountered. In some instances it has been necessary to trap interrupts and write interrupt handlers, if you want your computer to have network communication or precise timing etc. If not, as is usually the case, the simple approach above suffices. The data gathered can then be analysed using mathematical programs on the same computer.
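The critical-section trick described above, as a bare sketch and nothing more: _disable/_enable and __outbyte are MSVC compiler intrinsics that only make sense in kernel-mode (driver) code, the 0x378 address and the byte values are placeholders, and a real driver would also have to worry about IRQL and multiprocessor systems.

/* Sketch of a driver-side timed burst: mask interrupts, clock bytes out of
   the parallel port back-to-back, then restore interrupts. Kernel mode only. */
#include <intrin.h>

#define LPT1_DATA 0x378     /* placeholder port address */

void send_burst(const unsigned char *buf, int n)
{
    _disable();                          /* cli: no task switch, no IRQs   */
    for (int i = 0; i < n; i++)
        __outbyte(LPT1_DATA, buf[i]);    /* deterministic back-to-back I/O */
    _enable();                           /* sti: give the machine back     */
}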
But this is a diversion in this thread. The point is that usoft is claiming that allowing programmers access to ports is insecure. However, you need port access so the computer can do things. They solved the problem by having you purchase their DDK. Now you have port access. So what has that achieved, except money in usoft coffers and dependence on them? That's the point - the way manipulation works these days. One dupes the public into thinking you are solving their problems, whereas in fact you are stuffing your pockets. To this category I think we can add many of today's 'big issues' propagated by the highly informed mass media: the war on terror, the war on drugs, climate change, etc. And usoft has added a contribution: the war on piracy.
[Edited on 8-12-2009 by len1]
|
|