Monday, November 29, 2010

Flickr API

This falls into the category of both work and fun. Many of my clients use an online host to manage their photos. My personal favorite is Flickr. Whether you're a professional photographer looking to expand your market or show off your latest work, you're in a sales/marketing/promotion role sharing pictures from the latest trade show, or you simply need to share more than a handful of digital photographs, a photo-specific hosting site makes the job much easier and allows you to focus on results rather than the process.

Flickr provides the ability to easily embed photos in other applications and a highly extensible API for extracting and using photos for your own customized needs.

Before you begin creating scripts, you’ll need a Flickr API key.

From your account page, click on the “Sharing & Extending” tab.

Scroll down to “Your API keys” and click on the link to the right, which likely says “You have no API keys assigned to this account.”

Click on the “GET A KEY” button.

For now, we’ll stick with non-commercial applications, so click on “APPLY FOR A NON-COMMERCIAL KEY”.

You'll need a name and description for your app. For now, we can name it "Test Key" and give it a similar description. After reading the terms of use, check the boxes to confirm and click "SUBMIT".

You'll be given two hex strings, a "key" and a "secret". Save these and don't share them with anyone. You'll want to copy/paste them someplace for quick reference, but you can always retrieve them from your account page if necessary.

With your API key, we can now begin writing a script.

Once again, Perl has a readily available module for this, Flickr::API. Use cpan to install it:

cpan> install Flickr::API

I’d also recommend having Data::Dumper available.

cpan> install Data::Dumper

The first script will simply confirm the Perl module is working properly and your API key is functional. Fortunately, Flickr provides a test method. Make sure to replace your_key_here with your Flickr API key (not your secret).

# C:\Perl\bin\perl.exe

use strict;
use warnings;
use Flickr::API;
use Data::Dumper;

my $api = Flickr::API->new({'key' => 'your_key_here'});

my $response = $api->execute_method('flickr.test.echo');

print "Success:\t$response->{success}\n";
print "Error code:\t$response->{error_code}\n";
print "\n\n\n";
#print Dumper ($response);

For now, leave the last line (Dumper) commented out. Running the script should give you these results:

R> perl Flickr-test.pl
Success:        1
Error code:     0

If the results are flipped (success: 0 and a non-zero error code), there's a high probability your key isn't correct, or there's a problem with the Flickr::API module.
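If you're curious what the commented-out Dumper line would show, here's a rough sketch. The hash below is a hypothetical mock holding just the two fields the test script reads; the real Flickr::API response object carries much more (the parsed XML tree, HTTP details, and so on).

```perl
# C:\Perl\bin\perl.exe
use strict;
use warnings;
use Data::Dumper;

# Hypothetical mock of the response fields the test script reads;
# the real Flickr::API response contains more than this.
my $response = {
    'success'    => 1,
    'error_code' => 0,
};

# Dumper prints the whole structure, handy when a call fails
# and you need to see what actually came back.
print Dumper($response);
```

Dumper output like this is the quickest way to discover what fields are available before you start writing real method calls.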

Tuesday, November 9, 2010

Standards

We are regularly called on to report on the latency and packet loss to a remote location. We let a Perl script schlep the data into a CSV file, which then gets imported into Excel to produce a graph. Excel has a great capability to create charts on the fly, then edit and annotate them. But consider these two graphs, charting exactly the same data set:
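The collection step can be sketched in a few lines of Perl. This is only an illustration: the ping summary lines are hardcoded (Windows-style output), and the regexes would need adjusting for your platform's ping format.

```perl
use strict;
use warnings;

# Hardcoded sample of a Windows-style ping summary, for illustration only.
my $summary = 'Packets: Sent = 10, Received = 9, Lost = 1 (10% loss)';
my $times   = 'Minimum = 32ms, Maximum = 87ms, Average = 45ms';

# Pull out the two numbers we chart: packet loss and average latency.
my ($loss)    = $summary =~ /\((\d+)% loss\)/;
my ($latency) = $times   =~ /Average = (\d+)ms/;

# One CSV row per sample: timestamp, average latency (ms), loss (%).
my $csv_line = join(',', scalar localtime, $latency, $loss);
print "$csv_line\n";
```

In the real script, each row would be appended to the CSV file and the ping run on a schedule.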


image

image


The graph on the bottom produces a much more dramatic display of latency, and washes out the packet loss. The top graph emphasizes the packet loss, while reducing the impact of the latency. Both are accurate, but they tell different stories.

Typically, charts aren’t viewed in a vacuum, but are rather compared against other charts—charts of other locations, or historical charts.  For this reason, it’s important we compare charts of similar scale.

Not only must the axes be the same, but the proportions of the X and Y axes should be similar.

As part of the post-mortem templates, or performance report template, include graph standards: latency maximums, time-scales, and graph sizes.  It will make it easier for everyone to compare events as needed.

Friday, October 1, 2010

Cybersecurity Awareness Month, 2010

The cyber threat has become one of the most serious economic and national security challenges we face. America’s competitiveness and economic prosperity in the 21st century will depend on effective cybersecurity. Every Internet user has a role to play in securing cyberspace and ensuring the safety of ourselves, our families, and our communities online.

http://www.dhs.gov/files/programs/gc_1158611596104.shtm

Wednesday, May 5, 2010

Browser Acid Tests

Since I've been working at a hotspot ISP I've taken an interest in browser compatibility. We require the completion of a registration page before users can use our service. If users are unable to complete the registration because of a browser compatibility issue, it's a loss of service for the user and a loss of revenue for our company.



This has led me to understand browsers, and their rendering of HTML/CSS standards, at a deeper level. Strict enforcement of browser standards can be tested via the Acid Tests website.



However, strict enforcement of standards does not mean a better browsing experience. With Internet Explorer holding well over 50% market share, most websites are designed to work with it first and foremost.



You can access the testing site directly at: http://acid3.acidtests.org/



Firefox 3.6.3 gets a respectable 94%:


ScreenShot 10-05-05 10-43-18001.png



Chrome 4.1 gets 100%:


ScreenShot 10-05-05 10-43-13001.png



Opera 10 also gets 100%:


ScreenShot 10-05-05 10-43-15001.png



I.E. 7 produces a completely unusable rendering:


ScreenShot 10-05-05 10-43-10001.png

Tuesday, January 26, 2010

When is a graph not a graph...

I continue to be surprised at the number of times a group of people can look at the same graph and come to not only different, but directly opposing, conclusions.



Our department was recently contacted because one office was experiencing "slower than usual" transport speeds. Since my client heavily depends on transferring files between offices and between clients, a report of slow transfer speeds gets shoved to the top of the stack.



I asked what speeds they usually experience, and what they are experiencing now. The response consisted of simply this graph:




image001.png



So...do you see the problem? Me neither. There's not even a label on the Y axis to say what this graph represents. It's certainly not megabits/sec, or even megabytes/sec. Possibly bytes/sec. So I called back to ask and found out it's the number of files transferred per hour.



Now that we have that settled, it actually appears that we have recently transferred more files per hour than in the recent past. So another call is placed to clarify the perceived slowness. "It takes longer." "You're transferring twice as many files." "It shouldn't take this long." "How long should it take?" "It should be faster."



At this point I needed to prove (at least to myself) that there was definitively no network issue or--if there was--to uncover it.



At first glance, a chart of bandwidth utilization didn't reveal anything telling. There were no errors, no buffer allocation problems, and latency was well within tolerance. However, one noticeable artifact is that the bandwidth seemed to be stair-stepped. Many lines peaked at 0.5 mbps, with several more around 1.5 mbps, a few lines at 3.5 mbps, and none more than 4 mbps. Since this was a dedicated T3 circuit, I would have expected more random spikes.



201001-OR-Bandwidth.PNG



Since access to the far end was limited, we could only test in one direction. It was now time to dig into the application and how, exactly, files were being transferred. We found out (1) files are being transferred via FTP, (2) a script kicks off every other hour to send all files in a given directory, and (3) another script is kicked off on demand.



This is now beginning to make sense. With limited visibility on the far end, we decided to push a 1 gig file (hope they have space). Bandwidth rose to 2 mbps and plateaued. While that was running, a second transfer was started. Bandwidth rose to 4 mbps. We continued adding threads, until up to 10 concurrent 1 gig files were being pushed. The circuit climbed to an almost predictable rate of 20 mbps.

201001-OR-Bandwidth-multistream.PNG



With this evidence we were able to contact the server owner on the far end. Indeed, his server was limiting the per-session throughput to 2 mbps.
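The stair-stepped ceilings in the earlier chart fall straight out of that cap: aggregate throughput is simply the per-session limit times the number of parallel transfers. A trivial sketch of the arithmetic:

```perl
use strict;
use warnings;

# With a fixed per-session cap, aggregate throughput scales linearly
# with the number of parallel transfers.
my $per_session_mbps = 2;
for my $streams (1, 2, 10) {
    my $aggregate_mbps = $streams * $per_session_mbps;
    printf "%2d streams -> %2d mbps\n", $streams, $aggregate_mbps;
}
```

which matches the 2, 4, and 20 mbps plateaus we observed while adding threads.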



But this really wasn't a lesson in technical troubleshooting. This was a lesson in investigative work. A nameless graph and seemingly contradicts the end-user reports. Armed with limited capabilities, we were able to diagnose the problem and, better yet, propose a new solution.



On to the next...