Wednesday, September 18, 2013

Fix CMOS Battery Issue with NTP: the NetworkManager Edition

In the previous article, I described a way to trigger a script when the network comes up. That works on most Linux server setups, but not if you are using NetworkManager, which handles this differently. If you are not sure whether you are using NetworkManager, consider this: if you configure your wireless network through a taskbar applet or a GUI program, you are likely using NetworkManager. That, and the fact that the script in my previous article didn't work for you.
The default NetworkManager applet (Photo credit: Wikipedia)
NetworkManager can still do it. It can run scripts after a network interface is connected or disconnected; it just isn't very forthright about it. But first, a little clarification. In Linux parlance, connecting a network interface is known as "bringing the interface up", because the interface state changes from DOWN to UP, and vice versa. When talking about network connections, both sets of terms are used interchangeably.
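You can see this state with the standard ip tool (the loopback interface is used here only so the command works on any Linux box):

```shell
# Show an interface's state; the output includes flags such as UP
# and a "state UP"/"state DOWN" field.
ip link show dev lo
```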
NetworkManager offers this feature through a mechanism called the dispatcher. The dispatcher looks into the /etc/NetworkManager/dispatcher.d directory and runs the scripts saved there. Script names start with a two-digit number that determines the order in which they run. Each script is passed two parameters by NetworkManager: the network interface that changed state, and whether it was connected or disconnected.
In my case, I didn't care which network interface it was but it had to be just connected. So the script looked something like this.

#!/bin/sh
#
# Run ntpdate when a network interface is brought up by NetworkManager.
# The dispatcher passes two arguments: $1 is the interface name and
# $2 is the action ("up" or "down").
#
export LC_ALL=C
if [ "$2" = "up" ]; then
        /usr/sbin/ntpdate -u pool.ntp.org
        logger "Updated time due to network interface going up"
fi

Save the file as 10-ntpdate.sh, make it executable and drop it into the /etc/NetworkManager/dispatcher.d directory. That's it.
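If you want to sanity-check the up/down logic before installing the script, you can exercise it the same way the dispatcher would, with ntpdate stubbed out (the temp path and echo message below are purely illustrative):

```shell
# Exercise the dispatcher logic locally: NetworkManager calls the
# script as "<script> <interface> <action>", so we do the same.
tmp=$(mktemp -d)
cat > "$tmp/10-ntpdate.sh" <<'EOF'
#!/bin/sh
if [ "$2" = "up" ]; then
    echo "would run ntpdate now"
fi
EOF
chmod +x "$tmp/10-ntpdate.sh"
up_out=$("$tmp/10-ntpdate.sh" eth0 up)
down_out=$("$tmp/10-ntpdate.sh" eth0 down)
echo "up: $up_out"       # prints: up: would run ntpdate now
echo "down: $down_out"   # prints: down:
rm -rf "$tmp"
```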

Thursday, September 12, 2013

Solve CMOS Battery Issues with NTP

Hewlett-Packard Mini 1000 netbook (Photo credit: Wikipedia)
I guess it was time. My HP Mini 1000 netbook was giving me the wrong time and date every time it booted up. That made visiting HTTPS websites impossible, because I was apparently accessing them from the past. Setting the time didn't stick; on the next boot the clock would reset back to 2002, and I would have to set the time manually with ntpdate.
After a few rounds of this, I got tired and decided there had to be a better way. Since the netbook is connected to the Internet most of the time, I knew that a script could be triggered every time the network card started up. All I needed to do was add the ntpdate command and its options to it. The problem was that I didn't know which script that was, and I wasn't keen on making a custom change that future updates would overwrite.
I knew the scripts that set the network configuration were in /etc/sysconfig/network-scripts. My network interface family was eth, so the script that set it up was /etc/sysconfig/network-scripts/ifup-eth. At the end of the file was the command

exec /etc/sysconfig/network-scripts/ifup-post ${CONFIG} ${2}

Reading the /etc/sysconfig/network-scripts/ifup-post file, I found the following command at the end.

if [ -x /sbin/ifup-local ]; then
    /sbin/ifup-local ${DEVICE}
fi

However, the file /sbin/ifup-local did not exist, so I created one containing the ntpdate -u pool.ntp.org command. Now, every time the network is configured, the time is corrected.
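Recreating that file looks something like this. The sketch writes to a temporary path for illustration; the real file goes at /sbin/ifup-local, owned by root and executable:

```shell
# Create an ifup-local script like the one described above.
# /tmp/ifup-local.example is an illustrative stand-in for /sbin/ifup-local.
cat > /tmp/ifup-local.example <<'EOF'
#!/bin/sh
# ifup-post passes the device name as $1
/usr/sbin/ntpdate -u pool.ntp.org
EOF
chmod +x /tmp/ifup-local.example
```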
I know it doesn't address the problem of the battery being dead or needing replacement but it'll do for now.

This didn't work for you? Maybe you're using NetworkManager. Run ntpdate automatically with NetworkManager too.

Enhanced by Zemanta

Wednesday, September 11, 2013

Review of Blogger for Android 2013

Updated September 2012.
I am posting this on Blogger for Android. I am happy to report that it is a better app than the previous version. I am not happy to report that, once again, Google has delivered a product that is uneven and confusing to first-time users.
There are many things to like about this app. All of your Blogger blogs and their posts are immediately accessible. It is faster and more responsive, and switching between blogs is almost immediate. It does feel like the Blogger app is pulling the RSS feed of the blog instead of reading directly from the Blogger system, but it still works quickly. (Which should help explain the problem I had where the Blogger app just stomped over a published post.)
If you are a Blogger user who needs to note ideas for posts and work on drafts, that is what this app seems built for. However, since there are few or no layout formatting options, sites whose design language calls for a specific layout (e.g. each post has to have a related photo on the left side of the post) will find that final posting still has to be done at the PC. Which is odd, because one of the highlighted features is the ability to include photos taken with the phone's camera.
In fact, if you are thinking of using Blogger for Android as part of your blogging workflow, consider this advice: opening a saved or published post is a multi-step process. First, the app shows the complete post, including labels. Then you need to touch the post text to edit it. The good news is that it now hides the raw HTML and shows the rendered formatting.
Post settings are still missing from the Blogger app, but handling labels has improved. You could always add labels to the post being edited, but now, when you start typing, you get a list of previously used labels. That is really useful if you have many labels and can't remember which one to use off the top of your head. There is still no Schedule option to control when your post will be published, which makes posting from the app an all-or-nothing proposition. And knowing other Blogger users, they will miss the powerful SEO-friendly option of providing a search description, because it still isn't available. The Blogger app for Android (2013) is now good for general blogging, but it leans towards immediate posting.
It could be that Google is now leaning towards Google+ as a blogging platform. However, Google+ is still missing a killer feature: the ability to customize the layout, and therefore the user experience. If you blog on Google+, your post is just one post in a stream of others. You can set up Google+ Pages, but your audience will mostly read your posts in their stream rather than visit your specific Google+ page.
Also missing is a way to monetize your Google+ posts or page. This could mean that people who make money from ads will not be interested in posting on Google+. They may post publicly there to reach a bigger audience, but it will just link back to their Blogger site. It may not be a big revenue generator for most, but some prolific posters will find blogging solely on Google+ a turn-off.
Again, download the Blogger app for Android, because it is great for capturing ideas and first drafts. It is also increasingly becoming an equal companion to the Blogger site.

Tuesday, September 10, 2013

TFS Internet Gateway: The way to connect commercial mail systems in the mid 90s

TFS Brochure from Australia
Back in the mid-1990s, I was connecting businesses to the Internet. I was selling software to connect people to the Internet; this was back in the day when you had to buy software to get online, even before Windows 95. What many people don't remember is that Microsoft wanted to charge extra for software to connect to the Internet, in an add-on called the Windows 95 Plus Pack. I was selling a software suite called Internet-In-a-Box which competed directly with it. Truth be told, there was free software available to connect people to the Internet, but these were early days and you needed skills (like editing a text file) to set things up correctly. The strange thing was that I was still selling Internet-In-a-Box pretty well despite competing with Microsoft. The Box had more software; Internet-In-a-Box's big brother package even had a sweet TN3270 terminal emulator. But I found out that people were buying it because the setup was easier and we provided phone support. Try getting that from Microsoft back then.
My boss and I saw the writing on the wall, and we started to diversify. We were talking to a lot of companies that were interested in the Internet, but most of them were interested in the Internet as a resource, not for communication. It's not as if they didn't have e-mail; they had e-mail systems for internal communications, but those were built just for that. Most of those systems didn't have the optional module for connecting the e-mail system to the Internet.
A little work on the Internet and I found a unique product from Sweden called TFS Gateway. Its makers had been making a living building a system that allowed different commercial e-mail systems to talk to each other, either by using a mail system's API or by mimicking a remote system via the e-mail system's remote gateway. TFS Gateway converted mail messages into a common format (pseudo X.400) before passing them on to their final destination. It supported Microsoft Mail, Lotus Notes, cc:Mail and Novell GroupWise. What interested me was that TFS Gateway also had a module that connected to Internet mail systems: an SMTP gateway. A market opened up as more and more companies saw the benefit of Internet mail and wanted to connect their systems to the Internet.

Friday, May 10, 2013

Will Ubuntu eventually go BSD?

At some point, I think, Mark Shuttleworth looked back and thought, 'I wish I had chosen BSD instead of Linux'. Imagine Ubuntu powered by BSD or FreeBSD instead of the Linux kernel. Crazy talk? Speculation? Definitely.
But the thought couldn't help crossing my mind when I look back at what Ubuntu has been doing over the last year or so. The trend is very clear: they are moving away from Linux, and perhaps from GNU open source.
First was the use of the term 'the Ubuntu kernel' instead of the Linux kernel. You would be hard-pressed to find the word Linux on the Ubuntu website or paraphernalia. I can't fault Ubuntu for maintaining brand prominence, but why at the cost of diminishing the Linux brand? Surely Ubuntu is not ashamed of its Linux core. Some people have pointed out that perhaps they want to distance themselves from their Linux heritage. To this, I point out that it is only a heritage when you are a generation removed, like Linux's Unix heritage; Ubuntu is still clearly dependent on Linux. In a way, I do see their point. Perhaps there are almost no Linux references on the website because they really want to prepare us for an Ubuntu without Linux.
Then there is Unity. Maybe Ubuntu saw what I saw in the debacle of Gnome 3. The sense was simply that the Gnome developers had betrayed their user community. But instead of offering a safe haven for the majority who didn't agree with where Gnome was headed, Ubuntu saw an opportunity to differentiate themselves even further. They created another environment, open source but definitely under their control, hoping that users would flock to it as an upgrade path from Gnome 3. In the end, Unity didn't look much different from Gnome 3; users faced similar issues, and the two even share design principles, just applied to different parts of Ubuntu. And their goal is evidently the same: a touch-friendly, tablet interface. No matter how much the Gnome developers protest and claim otherwise, the proof is in the result of their work. In fact, taken from that perspective, both Unity and Gnome 3 are really good. The problem is that most users still don't have touch screens. The Gnome developers may want their Star Trek dream to come true; most of us just want to check mail.
The biggest step Ubuntu has taken so far is to move their graphical display technology away from the X Window System to something they developed themselves, called Mir. Let's get something clear first: X has a lot of problems. It is very simplistic in nature, mainly because it was designed in the 80s. It even needs a separate program to manage the windows and make them move, maximize and do the most basic functions. That doesn't include what we normally expect from a GUI, like cut and paste between applications. KDE and Gnome were built to fill the need for a graphical system that does more. But at X's core is cross-platform compatibility. I remember selling Linux boxes as replacements for expensive Sun and HP graphical workstations: the X applications still used the powerful CPUs of the servers, while the workstation only busied itself with managing the GUI. Ubuntu's move to Mir breaks this compatibility. They had originally planned to use Wayland to replace X; Wayland respected X and offered a way to coexist and interact with it. Mir doesn't seem to care about that. It also means that future Ubuntu users can't share their applications with other Linux (or even Unix) distros, and vice versa. But that is good for Ubuntu because it creates lock-in. Ubuntu says it really wants to build a graphical display system that can be used on both desktop and mobile platforms. If it locks in their users and makes applications written for Ubuntu exclusive to them, then what downside is there for them?

Monday, February 04, 2013

Going Minty: LinuxMint 14 MATE

I am not an Ubuntu fan. I stand firmly on the RPM side of the fence, not for any one particular reason. If I had to pinpoint it, it would probably be the pain I felt installing Debian for the first time, and the glacial pace of its development. This from someone who began installing Linux from tgz files. So I shied away from anything deb-based.
I also particularly loathe the fact that Ubuntu focused their efforts on the desktop (they were roundly criticized for not contributing to the kernel at one point) and then glorified themselves as the Linux distro for everyone. They committed what many geeks consider a cardinal sin: putting their name above that of Linux itself. While other distributions were calling themselves Linux-this or Something-Linux, Ubuntu decided that their brand was to be put forth in place of Linux. And that effort has worked. Ask most users who have heard of Linux and most likely they have heard of Ubuntu. In fact, I used to find people who knew about Ubuntu but had never heard of Linux. Worse, they knew nothing about any Linux other than Ubuntu. "What's RedHat?" And when they started to refer to the kernel as the Ubuntu kernel, I was convinced my decision not to support them was right.
But all that changed because of Gnome 3. It is no secret how much I hate Gnome 3 and how it represents the Gnome developers' attitude towards their community of users. I have a problem because I have been using Mageia, having followed the community there after the split with Mandriva, and they have decided to focus on the KDE desktop while offering Gnome 3 as an option. A lot of people think Mandriva was a KDE distro, but in reality support was equal for Gnome. In fact, the pervasiveness of the Mandriva/Mageia Control Center makes the decision a style choice more than a technological one. I am writing this on Xfce on Mageia, which I consider a viable alternative, although at times it feels like a downgrade.

Thursday, January 24, 2013

Microsoft invests in Dell to cut off Linux?

Update: It's official. Microsoft has a hand in taking Dell private, to the tune of $2 billion. Is this a loan, like the 'investment' in Apple was?
Dell is considering taking itself private. This is nothing odd; successful companies with a large cash pile will sometimes buy back their public shares, taking themselves private. They then do business as usual for a time before going public again, this time at a higher value. If you are interested in why, some possible reasons are explained at the end.
So while Dell taking itself private is not really interesting to technology watchers, the rumor that Microsoft and Silver Lake Partners are involved in the buyback of those shares and coming in as investors is. So why now? And why Dell? Microsoft has not been successful with hardware (other than mice and keyboards). The Xbox is a success, but its success comes from games software and subscription services rather than hardware alone; witness the Microsoft tablets and the Zune. So the question is what Microsoft would gain from having a say in the running of a PC manufacturer.
This could be another front Microsoft is pursuing against Linux. Dell made no secret that it was investigating the viability of offering Linux as a desktop OS option. The well-reviewed Dell XPS 13 Developer Edition (Project Sputnik) and regional versions of laptops pre-loaded with Linux are examples of their efforts. The adoption of Linux as a desktop OS option by a major PC manufacturer would spell big trouble for Microsoft. While Dell has offered several Linux OSes, such as RedHat and Ubuntu, as server OS options, this has not been a real threat to Microsoft, because Linux on servers has a low consumer profile and Microsoft makes more money on the connection licences for a server than on the sale of the server OS itself.
Another interesting twist to this story is the involvement of Silver Lake Partners. While many know of their collaboration with Microsoft on the Skype deal, the people behind both companies share a darker past in relation to Linux: both were entangled in some way with SCO as it was launching its legal battle against Linux.
Is there something more to this?

Tuesday, January 22, 2013

Blogger for Android review: Getting better

Google has released a new Blogger for Android, and I'm happy to report that it is an improvement over the previous version. I blog from everywhere. I used to use the Springpad app to capture initial ideas before forming them into posts; then it was a cut and paste away from being published via the Blogger web interface. Later, I switched to the previous version of the Blogger app and used it at every opportunity. Then I hit a bug. I could live with the app's idiosyncrasies, but when it became destructive, I ditched it. By then, I had used almost all of the features it offered, perhaps working it enough to find that bug.
The best new thing about this version is the way it lets you manage posts. It used to be one long list; now you can view drafts and published posts separately. You can safely view and edit drafts without worrying about accidentally opening a published post.
I reviewed a more recent version of the Blogger app for Android here.

Friday, January 18, 2013

Is Facebook Graph Search promoting insularity?

While I agree that the science of search needs further development, I don't think Facebook Graph Search is a step in the right direction. Basically, Graph Search uses information that users have previously shared and uses it to weight searches. To generalize, we can search within the stuff shared by people who are connected to us. We can do perspective searches: searches whose results are based on our perspective, scope or visibility of other people's Facebook data.
This isn't new. This search capability was previously available to advertisers and used to target ads at Facebook users. Look how successful that was. But that is not the point of this new feature. Its point is to add more data to Facebook, specifically to find out the context or importance of the shared or related data. With each search we do, we provide Facebook with more information about us. With each graph search, we provide Facebook with more information about the data we have in common with the people connected to us. This is a mother lode of information for advertisers, who constantly want to know the things that interest us so that they can lead us to them.
This used to have hilarious outcomes. That one-off search you did for your mother on recommendations for skin rash cream? Facebook associates that search with you. Were you then peppered with adverts for hemorrhoid cream and mature dating sites? Facebook can't be blamed entirely, because that is all it has on you. It needs more data to make your future searches more relevant. Maybe with this new data, Facebook will be able to differentiate one-off searches from searches you do regularly. Perhaps Facebook will notice that you haven't searched for skin rash cream in six months and stop sending you hemorrhoid cream adverts. If it were too smart, it would assume the worst and start sending you adverts on will generation and estate planning. If you then decide to stop using Facebook out of disgust, maybe Facebook will take this data, assume the worst again, and start sending your family members ads for caskets and funeral homes. Ouch.
How about it if Facebook uses the data it gathers from Graph Searches to target the ads for people around you? Will you start seeing messages that begin, "Which one of you guys searched for hemorrhoid cream?"

Thursday, December 06, 2012

Xperia Mini Pro with Android 4.0

I had read somewhere that Sony was going back on its promise to make Android 4, or Ice Cream Sandwich, available for my Xperia model, the Xperia Mini Pro: the small phone with the slide-out keyboard. Since I was mainly running Linux, I hadn't bothered to install the Windows-based desktop companion software.
However, the phone running Android 2.3 had been acting strangely: slowing down mid-app and losing touchscreen sensitivity when the battery was below 50%, which was quite often. The last straw was when the home screen started crashing. Of all the things I'd expect Sony to put care into, it is their home screen app. It is their own software, and the main thing top-brand Android phone makers use to differentiate themselves from each other. So I booted into Windows, ran Windows Update (which took a long time because I hadn't booted it up in a while) and removed the old LG desktop companion software. Several reboots later, I plugged the phone in and installed the companion software without a hitch. I thought maybe there would be a patch or something for the home screen software; I was surprised to see the Android 4 update available. I reckoned, what the heck, I had time to burn.
Contrary to my experience with the LG P500 Optimus One and its companion software, the Sony Xperia version was very easy to use. It was clear that Sony called upon its decades of consumer appliance experience in designing the software's interaction process. It was: download, plug in the phone, wait for it to restart, and Android 4 booted up on my phone.
Well, not exactly. There was one hitch. The software installed a driver while it was trying to communicate with the phone, or Windows installed a driver just before the software was about to be uploaded to the phone. Windows told me to reboot to activate the driver, which I didn't. I went ahead and tried to upload the new Android OS. Anxious minutes passed and nothing happened. Finally, a window popped up from the Sony companion software asking why it was taking so long, and assuring me it was safe to unplug and replug the phone. I did so, and still nothing happened. Finally, I decided to follow what Microsoft told me to do and rebooted the PC. I went through the process again, and it uploaded the new OS and rebooted in less than 30 minutes. Start to finish, it was slightly less than 2 hours, much of that spent on Windows Update and rebooting. That was much better than the two-day horror I went through upgrading the LG P500 from 2.2 to 2.3, but that story will have to wait for another time.
That was about 2 weeks ago. What's the verdict?

Monday, October 29, 2012

Insert a Table of Contents in a Google Doc with LibreOffice

In another post, I told of my troubles with LibreOffice and Outline Numbering and how they stopped me from setting a table of contents in a file I downloaded from Google Drive. I have finally succeeded in inserting a table of contents, with correct page numbering, into a Google Docs document using LibreOffice. It's not hard, but it is somewhat tedious. There are basically four things you need to do.

Step 1: Set up the document.


Download the Google Docs document as a LibreOffice/OpenOffice .ODT file. It will likely be downloaded as a temp file, which you normally can't edit, so save the file somewhere else to allow editing. Move the cursor to the top of the page.
Insert a manual break and change the type to Page Break. Set the style to Default, then change the page number to 1.
Again, move the cursor to the top of the document. Press F11 to bring up the Styles and Formatting window. Select the Page Styles icon from the top row and double-click on the First Page style.
Just to check, move the cursor between the top of the document and the other section and back, watching the section name in the status bar at the bottom of the window. The top one should say First Page, while the one below it should say Default.
For a better way to do this, scroll down to the comments section and check out Julian's way of doing Step 2 onward.

Step 2: Correct Outline Numbering styles
Click on Tools -> Outline Numbering to bring up the Outline Numbering window.
Select Level 1 from the list on the left. In the Paragraph Styles section, choose Heading 1.
Now select Level 2 from the list. Under Paragraph Styles, select Heading 2.
Repeat the process for all levels that have an empty Paragraph Style.


Thursday, October 25, 2012

A tale of Tables of Contents, Google Docs and the Outline Numbering Monster

I love Google Drive. Specifically, I love Google Docs. It gives me what I always wanted. A word processor on demand, whenever, wherever I need it. As long as I'm near a browser. On a PC connected to the Internet.
Both of which is becoming more common by the day.
To tell the truth, I never really pushed Google Docs. I never asked too much of it, because I never needed more than a simple word processor. Have you ever been to a Microsoft Word class where they say that 90% of Microsoft Word users never use more than 30% of its features? I'm one of the other 10%. There is not much I haven't done in Word: frames, pictures, multiple columns, sections, cross-references like footnotes and endnotes, renumbering pages, hierarchical documents where one document is made of many documents that other people are writing. I've even done some macros and VBA. Did you know that holding ALT and selecting text with the mouse selects a rectangular block of text, not lines of text? Did you also know that you could set the background of the editing area to blue? It was a feature to make WordPerfect users feel at home.
Despite all that, I never asked a lot from Google Docs, perhaps because I was still in awe of having a word processor in a browser. A browser!
That changed recently when I prepared a proposal for a client. Working with other people in different geographical locations on a single document is what Google Docs was made for: no sending files back and forth, no problems with different versions of files. Once the proposal was largely complete, it was time to prepare the document for the client. The group had been careful to use the proper headings at the correct levels, so all that was left was to generate the table of contents and the cover page. I had resigned myself to the fact that the cover page was going to be a separate document, because I couldn't get my head around how to reset page numbers in Google Docs. Then, when I went ahead and inserted the table of contents, it didn't have any page numbers.

Monday, October 22, 2012

How to make a PDF for free with Ghostscript

A few weeks back, I faced a problem with PDFs: I needed to combine several PDFs into a single PDF. The solution was to use Ghostscript. (I later found another tool that could do the same.) This brought back fond memories of Ghostscript and how it introduced me to the concept of "printing to a PDF".
At one time or another, we've all been asked how to make a PDF file. The natural reaction is that it requires Adobe Acrobat and costs money; a lot of money for something so trivial. This isn't a problem on Linux, because the ability to print to a PostScript file has been around for a long time: run the file through the ps2pdf program and you're done. Nowadays, you don't even have to do that. You can print to PDF straight from CUPS, and some programs like LibreOffice even offer the ability to export directly to PDF.
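The ps2pdf step looks like this, assuming Ghostscript is installed (ps2pdf ships with it; the file names are illustrative):

```shell
# Create a tiny PostScript file, then convert it to PDF with ps2pdf.
printf '%%!PS\n/Helvetica findfont 12 scalefont setfont\n72 720 moveto (Hello, PDF) show\nshowpage\n' > hello.ps
ps2pdf hello.ps hello.pdf
head -c 5 hello.pdf    # a valid PDF begins with "%PDF-"
```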
I'd like to share how to "print to a PDF" on Windows XP or even Windows 7. Basically, it's creating a PDF file by printing; except that instead of going to paper, the output becomes a PDF file. This opens up tremendous opportunities: any program that prints on Windows can, in effect, create a PDF.
The tool that makes this possible comes from the makers of Ghostscript. It's called RedMon. It redirects the output of a printer and feeds it into another program on Windows. In other words, it takes the printer output and, instead of putting it on paper, hands it to another program for further processing. This has many uses if you are creative enough, but it is most useful for creating PDFs with Ghostscript.
There are two ways to do this. The first is to install Ghostscript and RedMon, create a few files and configure a new printer. It's not terribly complicated technically, and the instructions to create a PDF printer using just Ghostscript and RedMon are very clear.
The second method is just as clear, although a bit shorter, and requires one more program called MakePDF.

Thursday, October 18, 2012

MSI CR650 Review with Linux

Update: I've given up on the proprietary ATI drivers. Read on to find out how to remove the ATI proprietary drivers.

In my previous post, I installed the Ubuntu-powered ZorinOS 6 on my friend's notebook, an MSI CR650. I've been able to kick its tires for a while now, and I'm sort of impressed.

Let's get one thing straight: this is no screamer. It is powered by an AMD Fusion E-240 CPU running at 1.6GHz, incorporating the AMD Radeon HD6310 GPU, which gives it better-than-average performance for a notebook in its price range. At 2GB of RAM, it is a bit cramped for Windows but great for Linux desktops.
If the specifications look odd, don't worry. Apparently, different regions get different specifications, but since the differences are only in the bundled OS (or lack thereof), CPU and memory configuration, MSI didn't bother changing the model number. Minor things, you know; it's the chassis that matters. And if you are buying this variant of the CR650, you are better off with a Linux distro.
Since ZorinOS 6 is essentially Ubuntu, you can equate my comments to any other Ubuntu variant like Mint or Ubuntu itself.
Almost everything runs out of the box. Just keep to the mantra of Install, Update and Reboot before doing anything else and you should be fine. I panicked when the wireless didn't seem to work at first and tried to fix it before the first big update. Wrong. Just let it go, do the Update and Reboot, and then judge.
My advice to anybody buying a notebook: go and buy it yourself at a shop. Pick it up and feel its weight. The CR650 is a bit on the heavy side. I blame that mostly on the larger-than-most 15.6" screen. Running at a max resolution of 1366 x 768, the screen is a beauty, a heavy beauty. The battery contributes too, but no more than in other notebooks.
The full keyboard took some getting used to. I've used a similarly large HP with a full keyboard before, but the CR650's keyboard posed some challenges. Mainly, its keys are not full size, just slightly smaller. Worse, the right shift key was cut in half to make way for the direction keys, which meant I pressed the up arrow key a lot. While the numeric keypad would be useful for someone who needs to enter numbers by the truck-load, I would rather have had a full keyboard. Plus, the location of the PgUp and PgDn keys takes some getting used to. The special Fn keys all work, with the exception of the Eco key, which is supposed to change the power usage profile. I don't know whether it works because there is no feedback. Dmesg is silent. Even if it works, I couldn't see any difference. There is a slight lag when pressing the volume keys, but not much. The top row of special keys next to the power key are all user-configurable, so you can assign them yourself.
There were some Linux-specific issues faced.

Tuesday, October 16, 2012

Modify a PDF


I received a comment after my recent post on how to combine PDFs. The result was that I was reminded of another PDF tool called pdftk. It's a command line tool that does a lot of things to PDF files. You can extract pages, burst the entire PDF into individual PDFs and, yes, merge multiple PDFs into a single PDF. So, if I wanted to merge the same documents with pdftk, the command would be

pdftk source1.pdf source2.pdf source3.pdf cat output merged.pdf

PDFtk can also encrypt or decrypt a PDF. That means adding or removing password protection.
It can even insert a watermark or a stamp. The difference is that while a watermark is an image underneath the document text, a stamp is an image or lettering on top of the document. An example is a stamp marked "NULL AND VOID" on a document. But that is not the strangest thing it can do.
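For the record, those operations look something like this on the command line. The filenames are made up for illustration, and the whole demo skips itself unless pdftk and a report.pdf are actually present:

```shell
# Hypothetical filenames throughout; each line is an independent example.
# Skip gracefully unless pdftk is installed and a report.pdf exists.
{ command -v pdftk >/dev/null 2>&1 && [ -f report.pdf ]; } || exit 0
pdftk report.pdf cat 2-5 output excerpt.pdf            # extract pages 2-5
pdftk report.pdf burst                                 # one PDF per page
pdftk report.pdf output locked.pdf user_pw s3cret      # add password protection
pdftk locked.pdf input_pw s3cret output plain.pdf      # remove the password
pdftk report.pdf background wm.pdf output marked.pdf   # watermark underneath
pdftk report.pdf stamp void.pdf output stamped.pdf     # stamp on top
```

Note the symmetry between `background` and `stamp`: same syntax, only the layering differs, which matches the watermark-versus-stamp distinction above.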

Monday, October 15, 2012

Blogger pageviews reset to 0

Update: By all accounts, the pageview count has been restored. No official explanation from Blogger yet. 

I loaded my Blogger dashboard today and saw that all my blogs had their pageview counts reset to 0. All of my blogs on Blogger had a pageview count of 0. Nada. Zip.
Yet I am not surprised. There are several reasons why this could have happened.
Let's first get out of the way the idea that crackers got into the Blogger system and reset the pageview counters for all of the blogs. There are several reasons why they would do this. First is the ego-boost. Defeating Blogger means 'breaking into Google'. They might even enjoy thinking some of the brainpower at Google is now aiming in their general direction. They could also be closet WordPress fans, offended that Blogger is still around when 'the rest of the world has gone WordPress'. Joomla and Drupal users, you're next. Perhaps these crackers were upset with Blogger. Why? Read on further down towards the end.
This could also be part of some spring clean-up gone awry. Blogger is known for some problems with dead and old blogs. Did you know that when you create a blog and delete it, the name of the blog can never be used again, ever? If you created myfluffycat.blogspot.com, then deleted it because you had a fight with your cat, but later wanted to re-create it after the two of you patched things up, you can't do it. So Blogger is littered with dead 'space', names of blogs that were deleted and can never be re-used. It's probably time Blogger did something about it. And probably someone deleted one file too many.
Most likely this is the result of Google's response to efforts at gaming and spamming Blogger. I'm not just talking about buying backlinks or blogging groups that commit to visiting each other's sites a few times a day and clicking on ads. I'm talking about some serious efforts by crackers to pry into the Blogger system. The end game is likely the automated insertion of content into user templates. The content could be links to viruses, drive-by attacks or just phishing attacks. Drive-by attacks are when a malicious program is automatically downloaded when someone opens a web page. They don't have to click anything. It just happens automatically. Phishing attacks are when a user is given a web page that tries to convince them to part with valuable information like bank account PIN numbers. Both of these attacks are made possible by inserting code into the user template. Now imagine what would happen if they inserted the code into all of a user's blog templates. Or the master template files that are used when you create a new blog.
I've also seen a form of spamming on Blogger involving referring URLs. A referring URL is the URL of the site the browser was on before it loaded the current URL. It's a fancy way of saying 'the site I was on which had a link to this site'. So, not only did the site have a link to the page or blog, someone actually clicked on that link to get to the page or blog. But since this is reported by the browser, it could really be anything. So some clever souls have been reporting spam URLs as the referring URLs. When a blogger clicks on the links to check who has been linking to their site, it brings them to a spam page or something even more sinister.
If Blogger has done something to prevent that from happening, that might have pissed off some crackers. More likely so if they were making money off it. So breaking in and resetting everyone's pageview count, and making bloggers everywhere pissed off at Blogger/Google, seems a measured response.
If it was them at all. For all we know, it was Blogger who reset the pageview count on purpose. Anybody running Google Analytics on their site knows what I am talking about. The pageview count and the numbers reported by Google Analytics have long been far apart. But in the last few months, they have been growing further and further apart. So Blogger could have upgraded the code that counts pageviews to reflect numbers closer to those of Analytics.
Whatever the reason the pageview count was reset to 0, I still love Blogger. I think it is the best platform for writers who are more concerned about writing. If I want to set up a blog on Blogger, I just register, create the blog name, set the template and start blogging. No plug-ins to set up or additional frameworks to install on top of an existing webserver system. There are a lot of nice template designs and, if you don't mind losing some control, the dynamic templates offer an interactive experience to your readers. That is why I have several blogs on varying topics (and with varying degrees of updating).
Real bloggers would just shrug this off and go back to thinking of more ways to drive traffic to their blogs.

Thursday, October 11, 2012

How to combine PDFs

Recently, I had to figure out how to join PDF files into one. This used to be something non-trivial. Nowadays, you can use SimpleScan to scan in documents and create a multi-page PDF. But a few days ago I found myself on an older Mandriva PC with a scanner and the need to create a multi-page PDF. I scanned the pages of the documents with trusty old XSane. Now I had the pages individually. I was thinking of something clever like opening up Scribus and pasting each image on its own page. I also thought of pasting the images into an OpenOffice document, but the images would shrink too much. I gave up thinking like a Windows user and looked at the problem in its most basic form. I could print the images into individual PDFs, but then I would need to combine them together. I was thinking along the lines of printing out in PostScript and then concatenating the files together. Then convert the resulting PostScript file into a PDF, which is trivial.
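For the curious, the PostScript route I was contemplating would have looked roughly like this, sketched with throwaway one-page files. ps2pdf is the wrapper Ghostscript ships for exactly this conversion; whether naive concatenation produces valid PostScript depends on the files, which is partly why I kept looking:

```shell
# Sketch of the abandoned plan, with generated one-page PostScript files.
# Skips quietly if Ghostscript's ps2pdf wrapper is not installed.
command -v ps2pdf >/dev/null 2>&1 || exit 0
printf '%%!PS\nshowpage\n' > page1.ps
printf '%%!PS\nshowpage\n' > page2.ps
cat page1.ps page2.ps > combined.ps    # naive concatenation of the pages
ps2pdf combined.ps combined.pdf        # PostScript to PDF, the trivial part
```

It works here because each generated page is a self-contained PostScript job, but real printer output often carries setup code that does not survive a blind `cat`.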
Finally, I decided I wasn't that smart and asked Google. I found the answer here, in a Macworld article, of all places. Basically, I had to print out the pages individually as PDFs, which involves setting the printer to print to file. Then I use good old Ghostscript. The command is

gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=merged.pdf source1.pdf source2.pdf source3.pdf etc.pdf

Replace source1.pdf, source2.pdf, source3.pdf and etc.pdf with the names of the files printed earlier. Run the command, rename merged.pdf, and I'm done.
Basically, the command takes the list of PDF files as input and 're-prints' them as one PDF file using the printer definition called "pdfwrite", which basically generates PDF output.
Never underestimate the power of command line tools.
Here is another tool I found to change PDFs.

Monday, October 08, 2012

The Elusive LibreOffice Title Slide

Warning: this is a rant over 10 years in the making. I don't know why it doesn't bother other people so much. Maybe because they have given up. Maybe because, every time, I just made an alternative choice. Isn't that part of the story of open source? Don't like something? Make another choice or fix it yourself. The "scratch your own itch" thing.
For me, this has been the StarOffice / OpenOffice / LibreOffice Impress Title slide.
For those who have not noticed, the title slide in the presentation software is not a real title slide. It's not a real title slide because it does not have a separate, different background from the other slides. It also does not have a different layout from the other slides. Perhaps because it doesn't have a different layout, it also does not have a different format scheme. Which makes the Impress 'title slide' nothing more than a normal slide without the content body.
Don't even try to point me to the 'Centered Text' layout. That is just a content slide without the title. Plus, changing the format in the Master Slide does not affect it. Which could mean that it is on par with a title slide. Except that you can't have consistent title slides, because the format of each Centered Text layout must be changed individually. Which is fine for a 10-slide presentation with a single Centered Text/Title slide, but not for anything requiring 3 or more title slides.
You can create another master and move the title on the title master to where you want it. And even change the background to make it uniquely yours. But you have to change it back, because when you add a slide, the title of the next added slide will be where you just moved it. That is even more kludgy than the Centered Text layout.

Tuesday, September 25, 2012

When in doubt, Webmin

I was listening to the Linux Outlaws podcast. Well, catching up, more likely. My time has been quite filled up lately and I'm behind on my podcast listening. I don't listen to a lot of them, but they are just so deep, and the discussions that go on spark my own opinions and ideas. Some of them, those that are better formed, end up here.
I was listening to the edition that was done right after OggCamp. By all accounts it was brilliant. Well, except that apparently there was some problem with food for the volunteers and that there was a mixer with some grannies. It's hilarious and you should listen to it yourself.
But what piqued my attention was Fab's issues with setting up a DHCP server. He had some problems, and although there were loads of Linux people around, most of them couldn't help him. Not that they didn't want to. It's because they weren't Linux server people. They were Ubuntu users, but mainly on the desktop.
Now I'm a server guy. Or so I keep telling myself. But I was wondering what I would do in Fab's situation, given that I have some basic idea of what to do. Or what I would tell someone in that situation.
Well, there seems to be only one sure thing to do: install Webmin. The only dependency is Perl, and no distro worth its salt lacks packages for Perl. Perl was the PHP of its day. And for most of the stuff you want to do with Webmin, the standard Perl packages will do. Even then, if you do need more, Webmin is smart enough to suggest what to do or to ask permission to do it itself.
So if you are ever caught having to do anything server-like on a Linux box (or a Solaris box, for that matter), just grab Webmin from the distro's repos or the Webmin site itself. It supports many languages and has add-ons for all sorts of things. But even with the standard Webmin, you can do almost all of the daily admin tasks. I do recommend installing it and gaining some familiarity. Help is uneven, with some modules having excellent help while others barely have any. Install it even on a desktop machine; it'll work. There is almost no difference in the basic OS between a desktop and a server. So give it a try. It's not the first time I've thought Webmin is great.

Saturday, September 22, 2012

Kicking the tyres on Ubuntu ZorinOS 6

A friend of mine asked me to install Linux for him. Now, I get that request a lot, but this friend of mine isn't your typical PC user. In fact, he had 'graduated' from Windows XP to the Macintosh recently and was facing some other problems on that platform. It wasn't too hard for him; it just required him to think in a different way.
He wanted me to convert a low-end laptop he bought so that he could try Linux out and eventually send the laptop to his parents. He understood the problems Windows can bring but wasn't sure how his parents would deal with a Macintosh. He chose Linux because he has seen me lock down a PC using Linux and was keen on reducing the support calls from his parents. Why not a tablet, which seems more appropriate given that its design limits the user to one app at a time? His parents understood the PC and would find a slab of glass too futuristic to deal with. Like my parents.
Since my friend was going to do the support himself, I had to choose something that was easy to support and for which support would be easily available to him. Plus, it had to look cool. My go-to distro is Mageia. It is extremely easy to support, but with Mageia 2 and GNOME 3's spectacular dive into utilitarianism, the looking-cool factor is gone. KDE was tempting, but I've been down too many rabbit-hole support calls with KDE when the user tries poking around with the settings. I'm sorry, but while I love KDE's customizability, it's not for the newbie who may be overwhelmed with choice.
I looked around and finally settled on ZorinOS. On paper, it was perfect. Being powered by Ubuntu means that a lot of resources are out there (people and webpages). But it also has the cool factor down pat. So I downloaded the ISO, loaded it onto a USB disk with UNetbootin and booted it on the MSI CR650 laptop.
Which distro had the first graphical installation tool? I'm not sure, but I was using OpenLinux in 1998/9 (before Caldera was bitten and became evil) and it had a graphical installation tool. That means graphical installers have been around a long time and that by now, all the major kinks should have been worked out; all we have to look out for are the small stuff, the details. Well, I'm not sure why, but after installing all the files, ZorinOS wanted to install Grub on the USB disk. "No matter," I thought, "I'll just change it so that it installs on the right disk." Well, the place to change that setting was on the same screen as the custom partitioning setup, way back at the beginning. So I had to set up a custom partition scheme in order for it to install Grub in the right place. I deemed this a lesser evil than working out which version of Grub I was looking at and its non-standard device numbering.
