Monday, December 28, 2015

Systemd is not the end of the world but someone needs to save us from it

This post has been months in the making. I believe I've reduced the rantiness to a minimum. It adds nothing to the discussion elsewhere on the Internet. It is, however, a burden off my chest.
I became aware of systemd as an init system in recent months as it gained traction in Fedora and competed with Upstart. I didn't give it much interest as it primarily dealt with services that are more associated with the desktop: plug and play, power management, etc. Imagine my surprise when I installed CentOS 7 and it was the default init option. And going back wasn't an option.
The discussion around its adoption has been intense. Here is a website that documents the fallacies in the arguments against it, and the follow-on discussion for a now-dead website that advocated boycotting it.
My main beef with systemd is philosophical. Systemd's complexity, and the way it goes against the concept of "many simple programs doing basic jobs well, working together", is not where Linux should be going. While many deride this as backward-looking, that concept has served us well and has brought us this far. Yes, I know that not everything that brought us this far can carry us into the future, but that point is evident only in hindsight. We should try new things, but not at the cost of what works. They should also be subject to discussion and mutual agreement. And finally, there should be a transition from what is to what will become.
The opposite of this concept is "one big program doing many complicated things", the best example being Windows. While some point out that systemd is a collection of applications, those applications are developed together as a whole and made inter-dependent.
Another beef is the sense of community (or lack thereof) around the development. The fan-boy-ism around it and the attitude of the developers (e.g. the attitude towards corruption in binary logs) are alienating. Although the hostility runs both ways, it is healthier on the sysvinit side of things. I understand why there is strong push-back from parts of the Linux community. This is the age-old difference between Linux users who are excited by solving problems and those who like to go home at the end of the day. Systemd is this complicated box of tools that can be used to solve complex problems in a complex way.

Thursday, September 24, 2015

End of another hiatus

I've been quiet for some time because I've had quite a few interesting gigs. Basically, I was thrown deep into the corporate end of the enterprise IT pool. And enough time has passed that I can share something about it.
I won't do that right now. Not the details. It's enough to say that it has reinforced some things I already knew to be true. Like how the difference between open source software and commercial enterprise software is just polish, plus the benefits that come with a large user base, like finding obscure bugs. Really, the average feature sets and functions are the same. It's just that commercial software can be easier to configure (sometimes at the cost of flexibility) and someone is selling it. The user may say that what they're looking for is support, but really they're just waiting for someone to sell them something.
There were a few surprises, though. Like however great a system can potentially be, it's the way it's used that can make an ocean of a difference.
Look out for this space for more.

Wednesday, September 23, 2015

Dual booting Windows 8 and Mageia 4: Part 1 - The Prep

I wanted to install Mageia on the Lenovo laptop as soon as my work allowed me to mess around with it since it was now my main system.
My main worry was UEFI Secure Boot, Microsoft's effort to make it harder to install anything else alongside Windows in the name of security. After going through so many detailed explanations, I'm still not convinced that it was a good move. Protecting anything is a good idea so long as the bad guys don't crack the key; but with a target as valuable as Windows, the cost of cracking the key may well be justified. I figured it would be best to see how other people were doing it. I found this great guide and saw how other Mageia users were dealing with it. In this day and age, there was already someone who shared how they did it on YouTube.
After reading some more, I found out that Mageia 4 already does some of the work needed (https://wiki.mageia.org/en/UEFI_how-to). This convinced me that the risk of messing up everything was not that high (famous last words).
I tried to shrink the 400GB Windows partition down to 100GB but could only get to 200GB; the cause was unmovable files. So I was thinking: do I "screw it and use GParted" or do I "do it the safe way"? Since I didn't have a backup copy of Windows 8 and I needed a 16GB USB stick to create a bootable recovery backup, I chose the safe way. Basically, I disabled the Windows system settings that were preventing me from shrinking the volume. These posts were helpful in getting me to that point.
http://ubuntuforums.org/showthread.php?t=2087466
http://www.download3k.com/articles/How-to-shrink-a-disk-volume-beyond-the-point-where-any-unmovable-files-are-located-00432
I was able to shrink the partition to about 70GB and then re-enabled the settings, doing all the reboots that were required along the way.


Sunday, December 28, 2014

Login failures and the joy of Linux

Linux login screen, as seen in Xubuntu 9.04 (Photo credit: Wikipedia)
I have only Linux running at home, most of the machines on Mageia. Which means that I am also Technical Support. A few days ago, the kids complained that they couldn't log in on the shared computer near the kitchen. I tried logging in, and after I entered my password, a dialog box appeared that said "The name org.gnome.DisplayManager was not provided by any .service files". Clicking on OK would land me back at the login page. Fortunately, I solved it pretty quickly.
I found out that there had been a power outage and the machine restarted with a filesystem error. The filesystem fixed itself, but then the error message came up. I reckoned one of the config files got mangled and needed to be re-installed. If you are new to Linux, this is not as bad as it sounds. This wasn't the only Linux box in the house, so I had options. My first guess was that the Mate / Gnome config file in my home directory was messed up. So I logged in as root. It logged me in without any error.
A quick Google search suggested that my GDM custom.conf file was likely mangled. I compared it with my laptop's version and it was the same. Then I remembered that Mageia didn't use GDM but LightDM. And then I realized that all I had to do was switch display managers. Mageia came with about four, so I was spoilt for choice. I opened up the Mageia Control Center, chose Boot, then Display Manager. I chose GDM, saved and logged out. Problem solved.
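If the graphical login is completely broken and the Control Center is out of reach, the same switch can be made from a text console by editing one file. This is a sketch assuming the Mageia/Mandriva sysconfig convention, so verify the file name on your release:

```shell
# /etc/sysconfig/desktop -- which display manager to start at boot
# (Mageia/Mandriva convention; use the lowercase name of the DM)
DISPLAYMANAGER=gdm
```

Reboot, or restart the display manager service, for the change to take effect.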
Sorta. I will have to get around to fixing LightDM, but there is no rush. GDM looks almost the same, and the Mageia developers went to great pains to ensure all the graphics were consistent. So the only difference my kids saw was that instead of a drop-down list with their names to choose from, their names were now in a dialog box list. It was something they looked at for about three seconds and knew immediately what to do.
This is one of the reasons I love using and working with Linux. It not only gives you choices, those choices are modular to the point where if one breaks down, you choose another that does the same thing and move on. This would have been a major catastrophe on MSWindows; I'd be looking at a re-installation at least. Even if I knew what file was corrupted, I could replace it, but I wouldn't know whether it would be of the same version as the other MSWindows components.

Tuesday, November 11, 2014

Beware: Blogger deletes everything in HTML editing if you don't save

Some words of warning when editing in HTML in Blogger and what you should do every single time to avoid disappointment.
I am livid. I was working on a post for hours when I decided to edit something in the HTML view of the editor. I saved the post before switching to HTML view. After some tweaking, I decided to drop the changes and revert. I clicked the Close (post) button and it warned me that all changes would be lost. I was fine with that and said OK. When I opened the post, it was gone. Blogger decided that since I didn't want to save, it should save nothing. Literally nothing. Blogger saved an empty page.
I lost all the work of the past hours. Pressing the back key sometimes brings back the previous state, but not this time. Blogger was serious. Even though I had saved, closed and opened my post several times, it didn't change the state of the page. I was working within one page as opposed to moving from page to page. I wonder how many people have done this and moved to WordPress or Tumblr in disgust. Perhaps this is one small way Blogger is killing blogging.
The way to avoid this and have a backup before editing a post in HTML is to use the preview function. Preview will generate a preview of the page in another tab. Then switch back and edit in HTML. If things go south, you at least have the text in the preview tab.
Thanks Blogger, for nothing. 

Monday, October 27, 2014

I touched my laptop screen and I liked it

I finally decided that I needed a new laptop. My 2008 HP Mini was really showing its age and I wanted to do some work with VMs that would tax my desktop. I did my homework and was content to buy a low-end laptop, hoping that Linux would be able to detect the 'standard' configuration without much fuss. Through a surprising turn of events, I ended up with a Lenovo Ideapad S410p Touch, a laptop with a touchscreen. It was an Intel i5 machine with 4GB of RAM (which I bumped up to 8GB), both VGA and HDMI outputs, and a DVD drive to boot.
Kids love a touchscreen (Photo credit: Wikipedia)
So how did it come about that way? I had done my homework and gone to buy the Lenovo laptop that didn't have an OS bundled, or the 'DOS version' as they called it. How many people buying new computers remember what the heck DOS is, is another question. But that range only came with AMD CPUs and, having done that in the past (and gotten nothing other than a warm lap and mediocre performance), I decided to go for the Intel version, the i5 specifically. But to keep my options open, I decided to also keep an open mind about the AMD A10 CPU, which was rated by most reviewers as good as the i5 although meant to compete with the i7s.
Next was to find someone who knew what they were talking about. Too many times I have been besieged by salespeople who knew little about what they were selling. It was time to give the right guy his due. I finally found a chap who gave me several options and let me try the laptops. Finally, I decided to ditch the A10 and went firm with the i5. He found me two models that fit the bill: a Windows 8 machine with a touchscreen and the OS-free version without a touchscreen.
For some reason, the non-touchscreen Lenovo laptop was slightly pricier and from a different model range. I did get the notion that the guy wanted to get rid of it because it was an older model. A quick check showed it was still listed as current on the Lenovo website, so I figured that it wasn't all that old. I figured I might as well see what the fuss was about with Windows 8 and the touchscreen interface.

Monday, September 15, 2014

Going Minty 3 - Solving why Gimp is opening PDFs on Chromium

Something I did not encounter on Mageia but which cropped up in Linux Mint is quite strange. It's strange because it seems counter-intuitive, especially for a distribution that does so well in keeping things user-friendly. The odd thing that happened to me in Linux Mint was that Chromium opens PDFs with Gimp.
GIMP 2.2.8 graphics software (Photo credit: Wikipedia)
Now this is not too bad if you have a good PC. And it's not wrong either, because Gimp can open PDFs and, better still, edit them. But if you want to open a multi-page PDF, Gimp will render each page up-front. Meaning that if the PDF has a lot of pages, it's gonna take some time. If your rig has less than 1GB of RAM, the wait becomes worse.
The solution is obvious: change the default program for opening PDFs. Unfortunately, that didn't work for me. Whatever I set it to, the default stayed Gimp. I did get a choice to switch to another program each time, but that gets annoying. So how does one change the default application? Apparently there is a common set of utilities called xdg-utils that helps with opening files. Applications built for freedesktop.org environments call on xdg-open to open document files. So for Chromium, after it downloads a PDF file, it calls on xdg-open, which determines the actual viewer and passes the name of the PDF to that viewer to open. The definition of the 'actual viewer' is set either by the underlying environment (KDE, GNOME, etc.) or by xdg-utils itself. The command is as follows:
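Concretely, the relevant xdg-utils commands look something like this; evince.desktop is an assumption here, so substitute the .desktop file of whatever PDF viewer you actually have installed:

```shell
# Ask xdg which application is currently associated with PDFs
xdg-mime query default application/pdf

# Re-point the association at Evince (or any other viewer whose
# .desktop file exists under /usr/share/applications)
xdg-mime default evince.desktop application/pdf
```

After that, Chromium's open-after-download should hand PDFs to the new viewer.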

Sunday, September 14, 2014

Away and back

There is no other way to say it. I haven't posted much in the past few months. Simply put, work overtook free time. In fact, work overtook everything else. So much so, I had to come to a decision: choose work or everything else.
Don't get me wrong. I loved working with the people I've been working with the last few months. They were, and still are, some of the smartest, most positive people I've worked with. Whatever came our way, we took on the problems and dealt with them the best way we could, with whatever we had. We played the hand we were dealt, no excuses. Inclusion was a big theme. Information was shared freely and bullshit was called out without shaming and without shame. Getting things done was the song of the day and it drowned everything else.
An artist's depiction of the rat race (Photo credit: Wikipedia)
But it came with heavy costs. Free Time fell first. Health came next. I'm sure Sanity would have been the next casualty. It's a big problem for me because I've seen how lives and families were lost when work took over everything. I could learn from lessons past or forge ahead.
So I made the decision. I value my life and my family more than work. Work is money, but having learned from others' hindsight, I've saved some just for a rainy day like this. Money can always be earned elsewhere. But love is life. And I love my life.
I have a ton of posts in draft so expect to see more in the next weeks. Thanks for sticking around.

Saturday, March 01, 2014

What Facebook saw in WhatsApp and Liked it enough to buy them

Sizing up WhatsApp and Twitter (Photo credit: Tsahi Levent-Levi)
A lot of people are scratching their heads about the Facebook deal with WhatsApp. Most of those heads are in the US. They just can't see why Facebook would pay so much money for a company that charges a dollar a year to use it, with the first year free. In fact, it seems that WhatsApp is looking for ways to give itself away for free. In the early days, all you had to do to get another year for free was to uninstall and reinstall the app. In some countries, using WhatsApp doesn't count against the data cap.
So what is Facebook really buying? It's very simple: Facebook is buying users. The popularity of WhatsApp in the rest of the world is so huge that it dwarfs the so-called popular messaging platforms. But what makes it most interesting is how loyal users are to it. Rather than bore you with numbers, here are the 5 reasons it is so popular and why Facebook splurged serious cash for it.
It's cross-platform where it matters.
To a lot of people, especially on iOS, WhatsApp was the way they communicated with their non-iPhone friends. It was also the app Blackberry users told their friends to install if they wanted to send them messages à la BBM. Using WhatsApp allowed you to join your friends on BB and iPhones.
While messaging platforms in the past were also cross-platform, the platforms they covered were traditionally computer-centric. WhatsApp is all about mobile platforms, from iOS and Android all the way to common Symbian phones. That makes it accessible to far more people than PCs ever did. For the younger generation, especially in the rest of the world, a smartphone is their first computer. Which is partly why there are so many active WhatsApp users.
It ties in with your phone number. This is the secret sauce. WhatsApp identifies you by your phone number. At first glance this may not seem like a big thing. But by making your phone number your unique ID, it ties you, the WhatsApp user, to a verified identity. Your phone company verified you as a paying customer, their definition of a "person". Different phone companies have different regulations for who can have a phone number, and each country has its own laws regarding phone number ownership. WhatsApp rides on these laws and regulations to ensure that the phone number being registered actually belongs to a person. This, plus the fact that users can only message people in their phone book or groups they can leave at any time, raises the bar of entry for bots and spammers.
Plus, having a globally unique ID like the phone number is a programmer's dream. They now have a way to follow you from phone to phone and keep you connected to your friends. Switch your handphone, even switch to another platform: all you have to do is insert the SIM card and install WhatsApp, and you start getting your messages and continue discussions in your WhatsApp groups. For those of us who can't figure out how to transfer contacts, this is really useful because your friends' names appear next to their phone numbers in the discussions. You can then add them back into your contacts on the new phone.

Monday, February 24, 2014

Recover from a bad superblock

When things go really bad, you may not be able to recover a disk. In those times, think of salvaging the data, reformatting and living to fight another day. Consider how valuable the data is versus the time spent repairing something that is damaged and may not be salvageable. testdisk, photorec and ddrescue are the tools to think of when you come to that decision.
But I do enjoy a challenge and when a USB disk was brought to me with mounting problems, I just couldn't pass it up. It was an uncommon setup. The USB stick had two partitions, one with an ext3 filesystem and the other with FAT32. I decided to focus on the ext3 filesystem first.
FSCK (Photo credit: SFview)
To cut a long story short, my efforts to mount the disk were met with screens full of error messages and cryptic clues as to what went wrong. Running fsck seemed to clean it at first, but it still would not mount the partition. Running fsck again would yield more errors, and a different set each time. My previous boss loved to use the expression "time to decide: fish or cut bait". It was one of those times.
This is probably the last-ditch effort before you make that fateful decision. This is the line in the sand, the one you have to cross before deciding to put your effort into getting the data out and starting all over again.
The recovery process involves rewriting the information about the partition. Specifically, reinitializing the superblock and group descriptors. However, reinitializing does not touch the data part of the partition. It does not touch things like the inodes and the blocks themselves. So by starting out with a 'fresh' set of information that is used to mount the disk, there is a possibility that the data may still be readable. After that, the data part gets checked and hopefully what you end up with is a filesystem that can be mounted properly.
The process can only be done when the partition is not mounted. If you have tried other ways, it most probably isn't. Mine wasn't, obviously.
So here's the process.
1. First, figure out the block size of the filesystem (in this case, the USB partition /dev/sdf1). I needed that information to re-build the partition information. Run the command
dumpe2fs /dev/sdf1 | grep -i 'block size'
Block size:               4096

2. Then re-format the superblocks. The command below won't format the whole partition, only the superblock and group descriptors. It is critical that you use the correct block size gathered from the previous step.
mke2fs -S -b 4096 -v /dev/sdf1

3. Now that the partition information is 'fresh', I checked the inodes to figure out what else could be wrong with the filesystem. Remember, ext3 = ext2 + journalling, so ext2 tools still work.
e2fsck -y -f -v -C 0 /dev/sdf1

4. Now that I was done with one element of the ext3 equation, it was time to fix the journalling system, or more specifically, to recreate the journal.
tune2fs -j /dev/sdf1

5. Re-attempt to mount the partition. If everything went well, you should be able to mount the partition and read the data.

After that, for hard disks, you have to determine whether the disk has reached its threshold limits. Things like the SMART attributes will help you get that information.
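The usual way to read those SMART attributes is smartctl from the smartmontools package; a sketch, assuming the disk shows up as /dev/sdf:

```shell
# Overall pass/fail verdict from the drive's self-assessment
smartctl -H /dev/sdf

# Full SMART attribute table; rising Reallocated_Sector_Ct or
# Current_Pending_Sector values suggest the disk is on its way out
smartctl -A /dev/sdf
```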

Interested to know more? http://ubuntuforums.org/showthread.php?t=1681972&page=5&p=10434656#post10434656


Sunday, February 16, 2014

Is Ubuntu licensing Linux? Canonical looking for value in the wrong places

Linux Mint 11 (Photo credit: Wikipedia)
Full disclosure: I am no fan of Ubuntu. I applaud their efforts to put Linux in as many hands as possible with the free CD distribution effort, but I'm of the opinion that Ubuntu puts itself above Linux while riding on the contributions of open source developers to Linux in general. I applaud their focus on making Linux user-friendly, but I'm of the opinion that their effort is no better than that of other distro developers, like Mandrake/Mandriva in the past. To top it off, I've predicted the path Ubuntu will eventually take once it decides it does not need the community any more.
So it comes as no surprise that Ubuntu's latest move to protect 'its intellectual property' is to license Ubuntu. Sounds harsh? Some people will think I am being unfair, using language normally used to describe Caldera. How else should I react when Canonical is asking derivative distros to sign a licence to use 'Ubuntu binaries'? Ubuntu apologists have already made their stand known. They have made light of the gravity of the demand to license, trying to convince us that the issue is about protecting the Ubuntu brand when it comes to derivative distros; Linux Mint, specifically.
I have to ask: why Linux Mint specifically? Does Canonical ask the same of Kubuntu and Lubuntu? Is it because Linux Mint is becoming increasingly popular at Ubuntu's expense? I've been thinking about writing on the possible danger of other distros basing their work on Ubuntu, and how risky it is to build on a source that is actively consolidating its hold. I guess I don't have to now.
Really, I don't. At the end of this post are links to articles that go into this deeper.

Monday, November 25, 2013

In the shadow of DOS: DESQView and DESQView/X

There is a wonderful article in Ars Technica that summarizes the story of OS/2, the competitor to Windows in its infancy. I'll have something to say about the article later.
The article made me pull out a draft I had kicking around on old-school tech. I had written about old-school tech like PCGEOS and TFS before, but this article reminded me of another legacy technology that is no longer around: memory managers, and the unintended competitor one memory manager spawned.
When PCs still ran DOS (or 'the command line' to you younger guys), it had a big weakness: DOS programs couldn't use more than 640KB of memory. As programs got bigger, there was a need to use memory above that limit. Then programs got weird: they wanted to stay in memory while you ran another program. These were called TSRs (terminate and stay resident). There were programs that displayed alarms or provided a function that could be called on at any time. These programs caused even more memory to be used.
The memory manager was born. Memory managers allowed more memory to be used by swapping blocks of memory under the 640KB limit with blocks of memory above the limit, fooling DOS into thinking it was still using 640KB of memory. The gory details of DOS memory management can be found here. The best memory manager was QEMM. It allowed more programs to run at once simply by making more memory available. But it soon took that to another level with a companion product called DESQView.
DESQView running DOS programs in windows
Before DESQView, a program had to be written specifically to become a TSR. But with DESQView, any program could become a TSR. This let you switch to another application and then switch back, without stopping the first application. There were other programs that could provide that function. But DESQView also allowed DOS applications to run within something called "windows". This meant that applications that were programmed to run full-screen could now run in a smaller virtual screen, better known as a window. Some graphical applications could run inside a window, too. The picture shows two full-screen applications, WordStar and Lotus 1-2-3, running at the same time. The top blue window shows other programs that are ready to run.
If it runs xeyes, it's X Windows.
Quarterdeck, the company behind both programs, upped the ante with DESQView/X. This was a program simply too far ahead of its time. It integrated an X Windows server with DESQView. Not many people could fully wrap their heads around what that meant. It was mainly marketed as a GUI for DOS and a Windows alternative. It also provided a tiled interface to launch DOS applications. With some applications that used smaller fonts, it allowed them to run on-screen at the same time as programs using normal-sized fonts. It even allowed graphical applications, like AutoCAD, to run in a window alongside normal text-based DOS programs. Some pictures in magazines even showed MSWindows running within DESQView/X. Although it wasn't virtualization, that feature did seem like it, mainly because MSWindows was essentially a graphical DOS program.
That's right, Tiled GUI and Windows-within-a-window
circa early 1990s
What blew people's minds was that because it was an X Windows server running on DOS, a PC running DESQView/X could serve DOS programs to X Windows workstations (X terminals still needed a start-up system for booting, IP management, etc.). If you are not familiar with X, this meant Unix workstations could display and control DOS programs that were actually running on the PC's CPU. Unfortunately, it didn't blow that many people's minds. The fact that the configuration meant bypassing some licensing restrictions also meant that had it ever become popular, it would have been shut down anyway.
If you are interested in old technology and want to experience how it was done in the old days, there are now sites that give guides and clues on how to rock it old school. Apparently, you can get QEMM and DESQView from here. Quarterdeck was bought by Symantec, but I'm not sure what they did with the technology they bought. My guess is that they bought it for the patents.

Wednesday, September 18, 2013

Fix CMOS Battery Issue with NTP: the NetworkManager Edition

In the previous article, I described a solution that triggers a script when the network goes up. It works on most server Linux setups, but not if you are using NetworkManager, which has a different way of doing this. If you are not sure whether you are using NetworkManager, consider this: if you are on a wireless network and configure it using an applet in the task bar or through a GUI program, it's likely that you are using NetworkManager. That, and the fact that the script in my previous article didn't work.
The default NetworkManager applet (Photo credit: Wikipedia)
NetworkManager can still do it. It can run scripts after a network interface is connected or disconnected; it's just not particularly forthright about it. But first, a little clarification. In Linux parlance, when a network is connected, it is known as "a network being brought up", because the network interface state is changed from DOWN to UP, and vice versa. When talking about network connections, both sets of terms are used interchangeably.
NetworkManager offers this feature through a mechanism called the dispatcher. Basically, the dispatcher looks into the /etc/NetworkManager/dispatcher.d directory and runs the scripts saved there. Scripts start with a two-digit number that determines the order in which they are run. The scripts are passed two parameters by the main NetworkManager system: the network interface involved, and whether it was connected or disconnected.
In my case, I didn't care which network interface it was but it had to be just connected. So the script looked something like this.

#!/bin/sh
#
# Run ntpdate when network interface goes up using NetworkManager
#
export LC_ALL=C
if [ "$2" = "up" ]; then
        /usr/sbin/ntpdate -u pool.ntp.org
        logger Updated time due to network interface going up
fi

Save the file as 10-ntpdate.sh and change its attributes to be executable. Drop it into the /etc/NetworkManager/dispatcher.d directory. That's it.
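Those last steps, as commands (assuming the script was saved in the current directory; root is needed for the copy):

```shell
# Make the dispatcher script executable and put it where
# NetworkManager's dispatcher looks for it
chmod +x 10-ntpdate.sh
cp 10-ntpdate.sh /etc/NetworkManager/dispatcher.d/
```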

Thursday, September 12, 2013

Solve CMOS Battery Issues with NTP

A Hewlett-Packard Mini 1000 netbook computer, ...
Hewlett-Packard Mini 1000 netbook Photo credit: Wikipedia)
I guess it was time. My HP Mini 1000 netbook was giving me the wrong time and date every time it booted up. It made going to websites with HTTPS impossible because I was apparently accessing them from the past. Setting the time didn't stick because the netbook would forget the current time and reset back to 2002. I would then manually reset the time using ntpdate.
After a few times, I got tired of this and decided that there is a better way. Since the netbook is connected to the Internet most of the time, I knew that a script could be triggered to run every time the network card started up. All I needed to do was to add the ntpdate command and options to it. Problem was I didn't know what script it was. I wasn't also big on making a custom change that would affect future updates.
I knew the scripts that set the network configuration were in /etc/sysconfig/network-scripts. My network interface family was eth, so the script that set it up was /etc/sysconfig/network-scripts/ifup-eth. At the end of the file was the command

exec /etc/sysconfig/network-scripts/ifup-post ${CONFIG} ${2}

Reading the /etc/sysconfig/network-scripts/ifup-post file, I found the following command at the end.

if [ -x /sbin/ifup-local ]; then
    /sbin/ifup-local ${DEVICE}
fi

However, the file /sbin/ifup-local did not exist. So I created one with the ntpdate -u pool.ntp.org command in it. Now, every time the network is configured, the time is corrected.
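The file I created boils down to something like this; a minimal sketch (ifup-post passes the device name as the first argument, but it isn't needed here):

```shell
#!/bin/sh
# /sbin/ifup-local: run by ifup-post after an interface comes up.
# Sync the clock against the NTP pool every time that happens.
/usr/sbin/ntpdate -u pool.ntp.org
```

Remember to make it executable (chmod +x /sbin/ifup-local); the `[ -x /sbin/ifup-local ]` test in ifup-post only runs it if the execute bit is set.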
I know it doesn't address the problem of the battery being dead or needing replacement but it'll do for now.

This didn't work for you? Maybe you're using NetworkManager. Run NTPDate automatically with NetworkManager too.


Wednesday, September 11, 2013

Review of Blogger for Android 2013

Updated September 2012.
I am posting this on Blogger for Android. I am happy to report that it is a better app than the previous version. I am also unhappy to report that Google has, once again, delivered a product that is uneven and confusing to first-time users.
There are many things to like about this app. All of your Blogger blogs and their posts are immediately accessible. It is faster and more responsive. Switching between blogs is almost immediate. It does feel like the Blogger app is pulling the RSS feed of the blog instead of pulling it directly from the Blogger system, but it still works quickly. (Which should help explain the problem I had where the Blogger app just stomped over a published post.)
If you are a Blogger user and need to note ideas for posts and work on drafts, this is what this app seems to be built for. However, since there are few or no layout formatting options, sites whose design language calls for a specific layout (e.g. each post has to have a related photo on the left side of the post) will find that final posting still has to be done at the PC. Which is odd, because one of the highlighted features is the ability to include photos taken with the phone's camera.
In fact, if you are thinking of using Blogger for Android as part of your blogging workflow, consider this advice: opening a saved or published post is a multi-step process. First, the app shows the complete post, including labels. Then you need to touch the post text to edit it. The good news is that it now hides the HTML markup and shows the formatted text.
Post settings are still missing from the Blogger app, but handling labels has improved. You could always add labels to the post being edited, but now when you start typing, you get a list of previously used labels. This is really useful if you have many labels and can't remember which one to use off the top of your head. There is still no option to schedule when your post will be published, which makes using the app to post on Blogger feel like an all-or-nothing proposition. And knowing other Blogger users, they will miss the powerful, SEO-friendly option of providing a search description, because it still isn't available. The Blogger App for Android (2013) is now good for general blogging but leans towards immediate posting.
It could be that Google is now leaning towards Google+ as a blogging platform. However, Google+ is still missing a killer feature: the ability to customize the layout and therefore the user experience. If you blog using Google+, your post is just one post in a stream of other posts. You can set up Google+ Pages, but your audience will mostly read your posts in their stream as opposed to going to your specific Google+ page.
Also missing is a way to monetize your Google+ posts or page. This could mean people who make money from ads will not be interested in posting on Google+. They may post publicly there to reach a bigger audience, but it'll just link back to their Blogger site. It may not be a big revenue generator for most, but some prolific posters will find blogging solely on Google+ a turn-off.
Again, download the Blogger App for Android because it is great for capturing ideas and first drafts. It is also increasingly becoming an equal companion to the Blogger site.

Tuesday, September 10, 2013

TFS Internet Gateway: The way to connect commercial mail systems in the mid 90s

TFS Brochure from Australia
Back in the mid-1990s, I was connecting businesses to the Internet. I was selling software to connect people to the Internet, back in the day when you had to buy software to get online. This was even pre-Windows 95. What many people don't remember is that Microsoft wanted to charge extra for software to connect to the Internet; they called the add-on the Windows 95 Plus Pack. I was selling a software suite called Internet-In-a-Box, which competed directly with it. Truth be told, there was free software available to connect people to the Internet, but these were early days and you had to have skills (like editing a text file) to set things up correctly. The strange thing was that I was still selling Internet-In-a-Box pretty well even though it competed with Microsoft. The Box had more software; Internet-In-a-Box's big brother package even had a sweet TN3270 terminal emulator. But I found out that people were buying it because the setup was easier and we provided phone support. Try getting that from Microsoft back then.
My boss and I saw the writing on the wall, and we started to diversify. We were talking to a lot of companies that were interested in the Internet, but most of them were interested in it as a resource, not for communication. It's not like they didn't have e-mail systems. They did have e-mail systems for internal communications, but those were built for just that: internal communication. Most of the systems didn't have the optional module for connecting the e-mail system to the Internet.
A little work on the Internet and I found a unique product from Sweden called TFS Gateway. The company had been making a living building a system that allowed different commercial e-mail systems to talk to each other. It did this by using the mail system's API or by mimicking a remote system via the e-mail system's remote gateway. TFS Gateway converted mail messages into a common format (pseudo X.400) before passing them on to their final destination. It supported Microsoft Mail, Lotus Notes, cc:Mail and Novell GroupWise. What interested me was that TFS Gateway also had a module that connected to Internet mail systems, specifically an SMTP gateway. A market opened up as more and more companies saw the benefit of Internet mail communications and wanted to connect their systems to the Internet.

Friday, May 10, 2013

Will Ubuntu eventually go BSD?

At some point, I think, Mark Shuttleworth looked back and thought, 'I wish I had chosen BSD instead of Linux'. Imagine Ubuntu powered by FreeBSD or another BSD instead of the Linux kernel. Crazy talk? Speculation? Definitely.
But the thought couldn't help crossing my mind when I look back at what Ubuntu has been doing over the last year or so. The trend is very clear: they are moving away from Linux and perhaps from GNU open source.
First was the use of the term 'the Ubuntu kernel' instead of the Linux kernel. You would be hard-pressed to find the word Linux on the Ubuntu website or paraphernalia. I can't fault Ubuntu for maintaining brand prominence, but why at the cost of diminishing the Linux brand? Surely Ubuntu is not ashamed of its Linux core. Some people have pointed out that perhaps they want to distance themselves from their Linux heritage. To this I point out that it is only heritage when you are a generation removed, like Linux's Unix heritage. Ubuntu is still clearly dependent on Linux. In a way, I do see their point. Perhaps there are almost no Linux references on the website because they really want to prepare us for an Ubuntu without Linux.
Then there is Unity. Maybe Ubuntu saw what I saw in the debacle of Gnome3. The sense was simply that the Gnome developers had betrayed their user community. But instead of offering a safe haven for the majority that didn't agree with where Gnome was headed, Ubuntu saw this as an opportunity to differentiate themselves even further. They created another environment, open source but definitely under their control, hoping that users would flock to it as an upgrade path from Gnome3. In the end, Unity didn't look much different from Gnome3; users faced similar issues, and the two even share design principles. Ubuntu just applied them to different parts of Ubuntu. And their goal is evidently the same: a touch-friendly, tablet interface. No matter how much the Gnome developers protest and claim otherwise, the proof is in the result of their work. In fact, when taken from that perspective, both Unity and Gnome3 are really good. The problem is, most users still don't have touch screens. The Gnome developers may want their Star Trek dream to come true; most of us just want to check mail.
The biggest step Ubuntu has taken so far is to move their graphical display technology away from X Windows to something they themselves developed, called Mir. Let's get something clear first. X Windows has a lot of problems. It is very simplistic in nature, mainly because it was designed in the 80s. It even needs a separate program to manage the windows and make them move, maximize and do even the most basic functions. That doesn't even include what we normally expect from a GUI, like cut and paste between applications. KDE and Gnome were built to fill the need for a graphical system that does more. But at the core is the fact that X Windows offered cross-platform compatibility. I remember selling Linux boxes as replacements for expensive Sun and HP graphical workstations. The X Windows applications still used the powerful CPUs of the servers while the workstation only busied itself with managing the GUI. Ubuntu moving to Mir breaks this compatibility. They had originally planned on using Wayland to replace X. Wayland respected X and offered a way to coexist and interact with it. Mir doesn't seem to care about that. What it also means is that future Ubuntu users can't share their applications with other Linux (or even Unix) distros and vice versa. But that is good for Ubuntu because it creates lock-in. Ubuntu says it really wants to build a graphical display system that can be used for both desktop and mobile platforms. If it locks in their users and makes applications written for Ubuntu exclusive to them, then what downside is there for them?

Monday, February 04, 2013

Going Minty: LinuxMint 14 MATE

I am not an Ubuntu fan. I stand firmly on the RPM side of the fence. Not for any one particular reason; if I had to pinpoint it, it would probably be the pain I felt installing Debian for the first time, and the glacial pace of its development. This from someone who began installing Linux from tgz files. So I shied away from anything deb-based.
I also particularly loathe the fact that Ubuntu focused their efforts on the desktop (they were roundly criticized for not contributing to the kernel at one point) and then glorified themselves as the Linux distro for everyone. They committed what many geeks consider a cardinal sin: they put their name above that of Linux itself. In fact, while other distributions were calling themselves Linux-this or Something-Linux, Ubuntu decided that their brand was to be put forth in place of Linux. And that effort has worked. Ask most users who have heard of Linux and most likely they have heard of Ubuntu. In fact, I used to find people who knew about Ubuntu and had not heard of Linux. Worse, they knew nothing about Linuxes other than Ubuntu. "What's RedHat?". And when they started to refer to the kernel as the Ubuntu kernel, I was convinced my position of not supporting them was right.
But all that changed because of Gnome3. It is no secret how much I hate Gnome3 and how it represents the Gnome developers' attitude to their community of users. I have a problem because I have been using Mageia, having followed the community there after the split with Mandriva, and they have decided to focus on the KDE desktop while offering Gnome3 as an option. A lot of people think that Mandriva was a KDE distro, but in reality support was equal for Gnome. In fact, the pervasiveness of the Mandriva/Mageia Control Center makes the decision a style choice more than a technology one. I am writing this on Xfce on Mageia, which I consider a viable alternative, although at times it feels like a downgrade.

Thursday, January 24, 2013

Microsoft invests in Dell to cut off Linux?

Update: It's official. Microsoft has a hand in taking Dell private, to the tune of $2 billion. Is this a loan, like the 'investment' in Apple was?
Dell is considering taking itself private. This is nothing odd; most successful companies with a large cash pile will consider buying back their public shares, taking themselves private. They would then do business as usual for a time before going public again, this time at a higher value. If you are interested in why, some possible reasons are explained at the end.
So while Dell considering taking itself private is not really interesting to technology watchers, the rumor that Microsoft and Silverlake Partners are involved in the buyback of those shares, coming in as investors, is interesting. So why now? And why Dell? Microsoft has not been successful with hardware (other than its mice and keyboards). The Xbox is a success, but its success comes from games software and subscription services rather than hardware alone; case in point, the Microsoft tablets and the Zune. So the question is what Microsoft would gain from having a say in the running of a PC manufacturer.
This could be another front Microsoft is pursuing against Linux. Dell made it no secret that they were investigating the viability of offering Linux as a desktop OS option. The well-reviewed Dell XPS 13 Developer Edition (Project Sputnik) and regional versions of laptops pre-loaded with Linux are examples of their efforts. The adoption of Linux as a desktop OS option by a major PC manufacturer would spell big trouble for Microsoft. While Dell has offered several Linux OSes such as RedHat and Ubuntu as options for a server OS, this has not been a real threat to Microsoft. This is because Linux on servers has a low consumer profile, and because Microsoft makes more money on the connection licences for the server than on the sale of the server OS itself.
Another interesting twist to this story is the involvement of Silverlake Partners. While many know of their collaboration with Microsoft on the Skype deal, the people behind both companies share a darker past in relation to Linux: both were entangled in some way with SCO as it was launching its legal battle against Linux.
Is there something more to this?

Tuesday, January 22, 2013

Blogger for Android review: Getting better

Google released a new Blogger for Android, and I'm happy to report that it is an improvement over the past version. I blog from everywhere. I used to use the Springpad app to capture initial ideas before forming them into posts; then it would be a cut and paste away from being posted via the Blogger web interface. Then I switched to the previous version of the Blogger app and used it at every opportunity I could. Then I hit a bug. I could live with the idiosyncrasies of the app, but when it became destructive, I ditched it. Before that, though, I had used almost all of the features it offered, perhaps working it enough to find that bug.
The best new thing about this version is the way it allows you to manage posts. In the past, they appeared in one long list, but now you can view them separately. You can safely view and edit drafts and not have to worry about accidentally opening a published post.
I reviewed a more recent version of the Blogger app for Android here.
