Monday, November 25, 2013

In the shadow of DOS: DESQView and DESQView/X

There is a wonderful article in Ars Technica that summarizes the story of OS/2, the competitor to Windows in its infancy. I'll have something to say about the article later.
The article made me pull out a draft I had kicking around on old-school tech. I had written about old-school tech like PCGEOS and TFS before, but this article reminded me of another legacy technology that is no longer around: memory managers, and the unintended competitor one memory manager spawned.
When PCs still ran DOS (or the command line, to you younger folks), they had a big weakness: DOS programs couldn't use more than 640KB of memory. As programs got bigger, there was a need to use memory above that limit. Then programs got weird: they wanted to stay running in memory while you ran another program. These were called TSRs (terminate and stay resident programs). There were TSRs that displayed alarms or provided a function that could be called on at any time. These programs caused even more memory to be used.
The memory manager was born. Memory managers allowed more memory to be used by swapping blocks of memory from under the 640KB limit with blocks of memory above the limit, fooling DOS into thinking it was still working within 640KB. The gory details of DOS memory management can be found here. The best memory manager was QEMM. It allowed more programs to run at once simply by making more memory available. But it soon took that to another level with a companion product called DESQView.
DESQView running DOS programs
in windows
Before DESQView, a program had to be specifically written to become a TSR. But with DESQView, any program could become a TSR. This allowed you to switch to another application and then switch back, without stopping the first application. There were other programs that could provide that function. But DESQView also allowed DOS applications to run within something called "windows". This meant that applications that were written to run full-screen could now run in a smaller virtual screen, better known as a window. Some graphical applications could run inside a window, too. The picture shows two full-screen applications, WordStar and Lotus 1-2-3, running at the same time. The top blue window shows other programs that are ready to run.
If it runs xeyes, it's X Windows.
Quarterdeck, the company behind both programs, upped the ante with DESQView/X. This was a program simply too far ahead of its time. It integrated an X Windows server with DESQView. Not many people could fully wrap their heads around what that meant. It was mainly marketed as a GUI for DOS and a Windows alternative. It also provided a tiled interface to launch DOS applications. Applications that used smaller fonts could run on-screen at the same time as programs using normal-sized fonts. It even allowed graphical applications, like AutoCAD, to run in a window alongside normal text-based DOS programs. Some pictures in magazines even showed MS Windows running within DESQView/X. Although it wasn't virtualization, that feature did seem like it, mainly because MS Windows was essentially a graphical DOS program.
That's right, Tiled GUI and Windows-within-a-window
circa early 1990s
What blew people's minds was that because it was an X Windows server running on DOS, a PC running DESQView/X could serve DOS programs to X Windows workstations (X Windows terminals still needed a start-up system for booting, IP management, etc.). If you are not familiar with X, this meant Unix workstations could run DOS programs, with the programs actually executing on the PC's CPU while the workstation handled the display. Unfortunately, it didn't blow that many people's minds. The fact that the configuration meant bypassing some licensing restrictions also meant that if it had ever become popular, it would have been shut down anyway.
If you are interested in old technology and want to experience how it was done in the old days, there are now sites that give guides and clues as to how to rock it old school. Apparently, you can get QEMM and DESQView from here. Quarterdeck was bought by Symantec, but I'm not sure what they did with the technology they bought. My guess is that they bought it for the patents.

Wednesday, September 18, 2013

Fix CMOS Battery Issue with NTP: the NetworkManager Edition

In the previous article, I described a solution that triggers a script when the network goes up. It works on most server Linux setups, but not if you are using NetworkManager, which has a different way of doing this. If you are not sure whether you are using NetworkManager, consider this: if you are using a wireless network and configuring it through an applet in the task bar or a GUI program, you are most likely using NetworkManager. That, and the fact that the script in my previous article didn't work.
The default NetworkManager applet (Photo credit: Wikipedia)
NetworkManager can still do it. It can run scripts after a network interface is connected or disconnected; it's just not upfront about it. But first, a little clarification. In Linux parlance, when a network is connected, it is known as "a network being brought up", because the network interface state changes from DOWN to UP, and vice versa. When talking about network connections, both sets of terms are used interchangeably.
NetworkManager offers this feature through a mechanism called a dispatcher. Basically, the dispatcher looks into the /etc/NetworkManager/dispatcher.d directory and runs the scripts saved there. Script names start with a two-digit number that determines the order in which they are run. Each script is passed two parameters from the main NetworkManager system: the network interface that was connected or disconnected, and whether it was connected or disconnected.
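To make the two-parameter contract concrete, here is an illustrative sketch (the function name and echo messages are mine, not NetworkManager's): a dispatcher script receives the interface as its first argument and the action as its second, and branches on the action.

```shell
#!/bin/sh
# Sketch of the dispatcher contract: NetworkManager calls every
# executable in /etc/NetworkManager/dispatcher.d as
#     <script> <interface> <action>
# where <action> is "up", "down", etc.

dispatch() {
    iface="$1"
    action="$2"
    case "$action" in
        up)   echo "interface $iface came up" ;;
        down) echo "interface $iface went down" ;;
        *)    echo "ignoring $action on $iface" ;;
    esac
}

# Simulate what NetworkManager would do when eth0 comes up:
dispatch eth0 up    # prints: interface eth0 came up
```

A real dispatcher script would do its work (such as calling ntpdate) in place of the echo statements.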
In my case, I didn't care which network interface it was, only that it had just been connected. So the script looked something like this.

#!/bin/sh
# Run ntpdate when a network interface goes up using NetworkManager.
# $1 = interface name, $2 = action ("up" or "down").
export LC_ALL=C
if [ "$2" = "up" ]; then
        /usr/sbin/ntpdate -u pool.ntp.org  # substitute your preferred NTP server
        logger "Updated time due to network interface going up"
fi

Save the file, change its attributes to be executable, and drop it into the /etc/NetworkManager/dispatcher.d directory. That's it.

Thursday, September 12, 2013

Solve CMOS Battery Issues with NTP

Hewlett-Packard Mini 1000 netbook (Photo credit: Wikipedia)
I guess it was time. My HP Mini 1000 netbook was giving me the wrong time and date every time it booted up. That made visiting HTTPS websites impossible, because I was apparently accessing them from the past. Setting the time didn't stick: on the next boot it would forget the current time and reset back to 2002. I would then manually reset the time using ntpdate.
After a few rounds of this, I got tired and decided there had to be a better way. Since the netbook is connected to the Internet most of the time, I knew that a script could be triggered to run every time the network card started up. All I needed to do was add the ntpdate command and its options to it. The problem was that I didn't know which script it was. I also wasn't keen on making a custom change that a future update would overwrite.
I knew the scripts that set the network configuration were in /etc/sysconfig/network-scripts. My network interface family was eth, so the script that set it up was /etc/sysconfig/network-scripts/ifup-cfg. At the end of the file was the command

exec /etc/sysconfig/network-scripts/ifup-post ${CONFIG} ${2}

Reading the /etc/sysconfig/network-scripts/ifup-post file, I found the following command at the end.

if [ -x /sbin/ifup-local ]; then
    /sbin/ifup-local ${DEVICE}
fi
However, the file /sbin/ifup-local did not exist by default, so I created one containing the ntpdate -u command. Now, every time the network is configured, the time is set correctly.
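For reference, a minimal /sbin/ifup-local hook might look like the sketch below. The NTP server name here is only an assumption; substitute whichever server you use. Remember to make the file executable with chmod +x.

```shell
#!/bin/sh
# /sbin/ifup-local -- called by ifup-post with the interface name in $1.

# Skip the loopback interface; it comes up without a network connection.
if [ "$1" != "lo" ]; then
    # -u uses an unprivileged source port, which helps when a firewall
    # blocks the privileged NTP port. pool.ntp.org is only an example.
    /usr/sbin/ntpdate -u pool.ntp.org
    logger "ifup-local: set clock after $1 came up"
fi
```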
I know it doesn't address the problem of the battery being dead or needing replacement but it'll do for now.

This didn't work for you? Maybe you're using NetworkManager. Run NTPDate automatically with NetworkManager too.


Wednesday, September 11, 2013

Review of Blogger for Android 2013

Updated September 2012.
I am posting this from Blogger for Android. I am happy to report that it is a better app than the previous version. I am not happy to report, however, that Google has once again delivered a product that seems uneven and confusing to first-time users.
There are many things to like about this app. All of your Blogger blogs and their posts are immediately accessible. It is faster and more responsive. Switching between blogs is almost immediate. It does feel like the Blogger app is pulling the RSS feed of the blog instead of pulling posts directly from the Blogger system, but it still works quickly. (Which should help explain the problem I had where the Blogger app just stomped over a published post.)
(Photo credit: Wikipedia)
If you are a Blogger user and need to note ideas for posts and work on drafts, this is what this app seems to be built for. However, since there are few or no layout formatting options, sites whose design language calls for a specific layout (e.g. each post has to have a related photo on the left side of the post) will find that final posting still has to be done at the PC. This is odd because one of the highlighted features is the ability to include photos taken with the phone's camera.
In fact, if you are thinking of using Blogger for Android as part of your blogging workflow, consider this advice: opening a saved or published post is a multi-step process. First, the app shows the complete post, including labels. Then you need to touch the post text to edit it. The good news is that it now hides the HTML tags and shows the formatted text.
Post settings are still missing from the Blogger app, but handling labels has improved. There has always been the ability to add labels to the post being edited, but now when you start typing, you get a list of previously used labels. This is really useful if you have many labels and can't remember which one to use off the top of your head. There is still no Schedule option to set when your post will be published, which makes using the app to post on Blogger an all-or-nothing proposition. And knowing other Blogger users, they will miss the powerful SEO-friendly option of providing a search description, because it still isn't available. The Blogger app for Android (2013) is now good for general blogging, but it leans towards immediate posting.
It could be that Google is now leaning towards Google+ as a blogging platform. However, Google+ is still missing a killer feature: the ability to customize the layout, and therefore the user experience. If you blog using Google+, your post is just one post in a stream of other posts. You can set up Google+ Pages, but your audience will mostly read your posts in their stream rather than visit your specific Google+ page.
Also missing is a way to monetize your Google+ posts or page. This could mean people who make money from ads will not be interested in posting on Google+. They may post publicly there to reach a bigger audience, but it'll just link back to their Blogger site. It may not be a big revenue generator for most, but some prolific posters will find posting solely to Google+ a turn-off.
Again, download the Blogger app for Android because it is great for capturing ideas and first drafts. It is also increasingly becoming an equal companion to the Blogger site.

Tuesday, September 10, 2013

TFS Internet Gateway: The way to connect commercial mail systems in the mid 90s

TFS Brochure from Australia
Back in the mid-1990s, I was connecting businesses to the Internet. I was selling software to connect people to the Internet. This was back in the day when you had to buy software to connect to the Internet; it was even pre-Windows 95. What many people don't remember is that Microsoft wanted to charge extra for software to connect to the Internet: the add-on was called the Windows 95 Plus Pack. I was selling a software suite called Internet-In-a-Box, which competed directly with it. Truth be told, there was free software available to connect people to the Internet, but these were early days and you had to have skills (like editing a text file) to set things up correctly. The strange thing was that I was still selling Internet-In-a-Box pretty well even while competing with Microsoft. The Box had more software; Internet-In-a-Box's big brother package even had a sweet TN3270 terminal emulator. But I found out that people were buying it because the setup was easier and we provided phone support. Try getting that from Microsoft back then.
My boss and I saw the writing on the wall and started to diversify. We were talking to a lot of companies that were interested in the Internet, but most of them were interested in the Internet as a resource, not for communication. It's not like they didn't have e-mail. They had e-mail systems for internal communications, but those were built just for that: internal communication. Most of the systems didn't have the optional module for connecting the e-mail system to the Internet.
A little searching on the Internet and I found a unique product from Sweden called TFS Gateway. The company had been making a living building a system that allowed different commercial e-mail systems to talk to each other. It did this by using each mail system's API, or by mimicking a remote system via the e-mail system's remote gateway. TFS Gateway converted mail messages into a common format (pseudo X.400) before passing them on to their final destination. It supported Microsoft Mail, Lotus Notes, cc:Mail and Novell GroupWise. What interested me was that TFS Gateway also had a module that connected to Internet mail systems: specifically, an SMTP gateway. A market opened up as more and more companies saw the benefit of Internet mail and wanted to connect their systems to the Internet.

Friday, May 10, 2013

Will Ubuntu eventually go BSD?

At some point, I think, Mark Shuttleworth looked back and thought, 'I wish I had chosen BSD instead of Linux'. Imagine Ubuntu powered by BSD or FreeBSD instead of the Linux kernel. Crazy talk? Speculation? Definitely.
But the thought can't help crossing my mind when I look back at what Ubuntu has been doing over the last year or so. The trend is very clear: they are moving away from Linux, and perhaps from GNU and open source.
First was the use of the term 'the Ubuntu kernel' instead of the Linux kernel. You would be hard-pressed to find the word Linux on the Ubuntu website or paraphernalia. I can't fault Ubuntu for maintaining brand prominence, but why do it at the cost of diminishing the Linux brand? Surely Ubuntu is not ashamed of its Linux core. Some people have pointed out that perhaps they want to distance themselves from their Linux heritage. To this, I point out that it is only a heritage when you are a generation removed, like Linux's Unix heritage; Ubuntu is still clearly dependent on Linux. In a way, I do see their point. Perhaps there are almost no Linux references on the website because they really want to prepare us for an Ubuntu without Linux.
Then there is Unity. Maybe Ubuntu saw what I saw in the Gnome3 debacle. The sense was simply that the Gnome developers had betrayed their user community. But instead of offering a safe haven for the majority who didn't agree with where Gnome was headed, Ubuntu saw an opportunity to differentiate themselves even further. They created another environment, open source but definitely under their control, hoping users would flock to it as an upgrade path from Gnome3. In the end, Unity didn't look much different from Gnome3: users faced similar issues, and the two even share design principles, just applied to different parts of Ubuntu. And their goal is evidently the same: a touch-friendly, tablet interface. No matter how much the Gnome developers protest and claim otherwise, the proof is in the result of their work. In fact, taken from that perspective, both Unity and Gnome3 are really good. The problem is, most users still don't have touch screens. The Gnome developers may want their Star Trek dream to come true; most of us just want to check mail.
The biggest step Ubuntu has taken so far is to move their graphical display technology away from X Windows to something they developed themselves, called Mir. Let's get something clear first: X Windows has a lot of problems. It is very simplistic in nature, mainly because it was designed in the 80s. It even needs a separate program to manage the windows and make them move, maximize and do even the most basic functions. That doesn't include what we normally expect from a GUI, like cut and paste between applications. KDE and Gnome were built to fill the need for a graphical system that does more. But at the core is the fact that X Windows offered cross-platform compatibility. I remember selling Linux boxes as replacements for expensive Sun and HP graphical workstations. The X Windows applications still used the powerful CPUs of the servers while the workstation only busied itself with managing the GUI. Ubuntu moving to Mir breaks this compatibility. They had originally planned on using Wayland to replace X. Wayland respected X and offered a way to coexist and interact with it; Mir doesn't seem to care about that. What this also means is that future Ubuntu users won't be able to share their applications with other Linux (or even Unix) distros, and vice versa. But that is good for Ubuntu because it creates lock-in. Ubuntu says it really wants to build a graphical display system that can be used for both desktop and mobile platforms. If it locks in their users and makes applications written for Ubuntu exclusive to them, then what downside is there for them?

Monday, February 04, 2013

Going Minty: LinuxMint 14 MATE

I am not an Ubuntu fan. I stand firmly on the RPM side of the fence, not for any one particular reason. If I had to pinpoint it, it would probably be the pain I felt installing Debian for the first time, and the glacial pace of its development. This from someone who began installing Linux from tgz files. So I shied away from anything deb-based.
I also particularly loathe the fact that Ubuntu focused their efforts on the desktop (they were roundly criticized for not contributing to the kernel at one point) and then glorified themselves as the Linux distro for everyone. They committed what many geeks consider a cardinal sin: putting their name above that of Linux itself. While other distributions were calling themselves Linux-this or Something-Linux, Ubuntu decided that their brand would be put forth in place of Linux. And that effort has worked. Ask most users who have heard of Linux and most likely they have heard of Ubuntu. In fact, I used to find people who knew about Ubuntu but had never heard of Linux. Worse, they knew nothing about Linuxes other than Ubuntu. "What's RedHat?". And when they started to refer to the kernel as the Ubuntu kernel, I was convinced my position of not supporting them was right.
But all that changed because of Gnome3. It is no secret how much I hate Gnome3 and how it represents the Gnome developers' attitude towards their community of users. I have a problem because I have been using Mageia, having followed the community there after the split with Mandriva. And they have decided to focus on the KDE desktop while offering Gnome3 as an option. A lot of people think that Mandriva was a KDE distro, but in reality support was equal for Gnome. In fact, the pervasiveness of the Mandriva/Mageia Control Center makes the decision a style choice more than a technology one. I am writing this on XFce on Mageia, which I consider a viable alternative, although at times it feels like a downgrade.

Thursday, January 24, 2013

Microsoft invests in Dell to cut off Linux?

Update: It's official. Microsoft has a hand in taking Dell private, to the tune of $2 billion. Is this a loan, like the 'investment' in Apple was?
Dell is considering taking itself private. This is nothing odd; most successful companies with a large cash pile will consider buying back their public shares, taking themselves private. They then do business as usual for a time before going public again, this time at a higher value. If you are interested in why, some possible reasons are explained at the end.
So while Dell taking itself private is not really interesting to technology watchers, the rumor that Microsoft and Silver Lake Partners are involved in the buyback of those shares, coming in as investors, is interesting. So why now? And why Dell? Microsoft has not been successful with hardware (other than mice and keyboards). The Xbox is a success, but its success comes from games software and subscription services rather than hardware alone. Case in point: the Microsoft tablets and the Zune. So the question is what Microsoft would gain from having a say in the running of a PC manufacturer.
This could be another front Microsoft is pursuing against Linux. Dell made no secret that they were investigating the viability of offering Linux as a desktop OS option. The well-reviewed Dell XPS 13 Developer Edition, or Project Sputnik, and regional versions of laptops pre-loaded with Linux are examples of their efforts. The adoption of Linux as a desktop OS option by a major PC manufacturer would spell big trouble for Microsoft. While Dell has offered several Linux OSes, such as RedHat and Ubuntu, as server OS options, this has not been a real threat to Microsoft. That is because Linux on servers has a low consumer profile, and because Microsoft makes more money on the connection licences for a server than on the sale of the server OS itself.
Another interesting twist to this story is the involvement of Silver Lake Partners. While many know of their collaboration with Microsoft on the Skype deal, the people behind both companies share a darker past in relation to Linux: both were entangled in some way with SCO as it was launching its legal battle against Linux.
Is there something more to this?

Tuesday, January 22, 2013

Blogger for Android review: Getting better

Google released a new Blogger for Android and I'm happy to report that it is an improvement over the previous version. I blog from everywhere. I used to use the Springpad app to capture initial ideas before forming them into posts; then it would be a cut and paste away from being posted via the Blogger web interface. Then I switched to the previous version of the Blogger app. I used it at every opportunity I could, until I hit a bug. I could live with the idiosyncrasies of the app, but when it became destructive, I ditched it. Before that, though, I had used almost all of the features it offered, perhaps working it enough to find that bug.
The best new thing about this version is the way it allows you to manage posts. In the past, they appeared in one long list, but now you can view them separately. You can safely view and edit drafts without worrying about accidentally opening a published post.
I reviewed a more recent version of the Blogger app for Android here.

Friday, January 18, 2013

Is Facebook Graph Search promoting insularity?

While I agree that the science of search needs to be further developed, I don't think Facebook Graph Search is a step in the right direction. Basically, Facebook Graph Search takes information that users have shared previously and uses it to weight searches. To generalize, we can search within the stuff shared by other people who are connected to us. We can do perspective searches: searches that give results based on your perspective, or your scope of visibility into other people's Facebook data.
This isn't new. This search feature was previously available to advertisers and used to target ads at Facebook users. Look how successful that was. But that is not the point of this new feature. Its point is to add more data to Facebook, specifically to find out the context or importance of specific shared or related Facebook data. With each search we do, we provide Facebook with more information about us. With each graph search we do, we provide Facebook with more information about the data we have in common with the people connected to us. This is a mother lode of information for advertisers. They constantly want to know the things that interest us so that they can lead us to them.
This used to have hilarious outcomes. That one-off search you did for your mother for recommendations on skin rash cream? Facebook associates that search with you. Were you then peppered with adverts for hemorrhoid cream and mature relationship/dating sites? Facebook can't be blamed totally, because that is all they have on you. They need more data to make your future searches more relevant. Maybe with this new data, Facebook will be able to tell one-off searches from searches that you do regularly. Perhaps Facebook will notice that you haven't searched for skin rash cream in six months and stop sending you hemorrhoid cream adverts. If it was too smart, it would assume the worst and start sending you adverts on will preparation and estate planning. If you then decide to stop using Facebook out of disgust, maybe Facebook will take this data, assume the worst, and start sending your family members ads for caskets and funeral homes. Ouch.
And what if Facebook uses the data it gathers from Graph Searches to target ads at the people around you? Will you start seeing messages that begin, "Which one of you guys searched for hemorrhoid cream?"
