2011-05-31

Maintainability, Security vs Perfectly Deployed

I was part of an interesting conversation today. Specifically, a client for the day job wanted some feedback on their proposed push to the cloud. Their plan was architected by their main developer, but the client wanted our input since we manage their existing servers: an eclectic set of differently configured and versioned FreeBSD boxes, plus a CentOS 5 web server that we migrated their main web presence (and store) to a few years back.

The developer's server layout was simple enough, and should cause no problems: using GoGrid's cloud offering, a set of web servers and app servers behind the included F5 load balancer (it comes with the cloud), with a MySQL cluster on the back-end. Now, I'm sure it's going to start as just a single server of each type, if even that, but that's a fairly vanilla scalable cloud architecture; no surprises there. The problem is, of course, in the details. It even starts innocuously enough: the developer wants to run Debian on the systems.

Now, I don't have anything against Debian. It's a fine distribution, one of the big few, in fact. My suggestion, though, was to use a corporate-backed distribution in lieu of Debian. My reasoning goes like so: having market pressure to fix problems is a wonderful incentive to get important things fixed FAST. In this respect, for an enterprise deployment, I view Ubuntu (LTS) as the more fitting choice if a Debian-like distribution is desired. When running a business, it's important to know that the company responsible for your server software is tied financially to providing the support you need.

That's simple enough, and hardly a point to get caught up on for anyone not trying to take a hardline stance on software freedom (a stance that would be hard to defend for a company that develops and sells software, as the client does, and that wasn't their intention anyway). The real problem comes next. The developer wants to install a minimal Debian root and manually compile all the needed software into /srv along with the content. His reasons are simple, but in my opinion, very, very wrong.

I'll outline them here, and explain in detail why, from an administration and maintenance standpoint, these are the wrong choices.

  • He wants the extra performance that compiling a lean Apache, with only the required modules compiled in, allows

  • He doesn't want to use a package manager, as it makes everything too hard to deal with. He would prefer a few hand-written shell scripts to do the building.

  • He wants more control over security updates, so that he only applies them and reloads services when a security problem actually affects them



I'm not going to spend much time addressing the specific incident that spurred this. Based on the conversation, I think this developer just needs some education as to what features are available. I'm much more interested in addressing the idea as a whole that a customized environment is much better than using what you get from the enterprise distributions. As such, I want to focus on addressing the generalities of these statements, and why they either don't hold true, or need to be weighed in context of both their positive and negative consequences.

Note: This is all aimed at small to medium businesses, where a large staff of system administrators is neither required nor desired. At a certain scale (or in certain markets), it DOES make sense to start doing your own system engineering to get a competitive edge. If you are one of those companies, this doesn't apply to you. If you don't know whether you are one of those companies, then you either aren't one, or you aren't in a position to care.


Let's start by addressing the idea that compiling a custom version of Apache with components and features excluded will result in extra performance. First, this is true: my experience, common sense, and the Apache developers themselves all say there are compilation options that will result in a faster program. My point is that it's generally not worth it. If you are building a system and you expect load to increase, scale out, don't scale up. That is, design the system so extra nodes can easily be added; don't over-optimize each node yet. All optimization, especially early optimization, limits your future choices in some respect (or at a minimum makes certain choices harder than they would otherwise be) while enhancing the feasibility of others. That isn't conducive to the best solutions later.

The next item is about package management. The claim is that it makes everything too hard to deal with (the actual words used included "nightmare"). I can only attribute this to a lack of experience with package management systems. The RPM format, for instance, is quite flexible and allows for everything from compiled binaries, scripts, and documentation to an empty package that just acts as a node to pull in all the required packages for a complete subsystem of functionality. In an RPM spec file, you specify the requirements, the build process (how to unpack, patch, build and deploy), any pre- or post-install procedures, and a list of all installed files with their locations and types. This last bit is really helpful, as it lets the package manager know which files are considered configuration files, and how to treat them (back up the existing file and overwrite, place the new file next to the existing one with a new extension, etc.). With this you get complete package file listings, file integrity and permission checks, easy installation and removal, dependency tracking, and a complete separation of your runtime environment from your content in a manageable manner.

Finally, there is apparently a desire by the developer to manage all security updates manually, so security updates don't negatively impact the production environment. This shows the biggest misunderstanding of what an enterprise distribution really provides, which is reliability. There are a few things to address here, so I'll spread it out into a few areas: security management, testing, and finally division of roles.

Something it seems many people don't understand is that enterprise distributions back-port patches to the versions of programs they shipped, so your environment doesn't need to change. That is, if RHEL5 shipped with Apache 2.0.52, chances are it will stay version 2.0.52. Specifically, Red Hat will handle any security problems and back-port the fixes from the version they were implemented in to the version they shipped. This allows for a stable environment, where all you have to worry about is whether bugs and security issues are dealt with, while all other functionality stays the same. API changes in newer versions? Removed features? New bugs? Not your problem. Note: In RHEL systems, some packages may be considered for new versions on point releases, generally every six months. You can rely on these updates not to reduce functionality, and to be compatible with the prior version in every aspect you may rely on. Additionally, instead of upgrading the version, they may choose to back-port an entire feature into the older version. These changes all go through extensive QA processes, and are deployed to thousands of systems, which happens to be the next point. In the end, what this really means is that you get security updates without having to worry unduly about whether they will cause a problem in your environment.

Testing is important. By rolling your own environment, you are saying that you believe you can integrate all the components and test them sufficiently that you feel confident the environment will perform as needed without problems. Most developers get this part right, as they know their core needs better than anyone else. The problem is all the little changes. You need to re-certify this environment after any change. How many developers take the time to do this? It's drudge work, it's hard to find all the edge cases (much less test them), and most of the time, it just works. Or appears to, at least. One of the single biggest advantages to using a pre-packaged binary with wide distribution is that any problems are likely to have been encountered before you hit them, and they might have been documented and fixed already as well. You are essentially leveraging the QA, testing and pre-existing deployments of both the distributor (such as Red Hat or Ubuntu) and the thousands of companies that rely on them.

Finally, there's the division of roles. Should a developer really be responsible for tracking down, examining, applying and testing security updates? These should be fire-and-forget procedures that cause no worry, and nothing but a slight blip on the meters as services are restarted as needed. A developer should be building new products or supporting old ones, not performing the role of a sysadmin because they decided they needed a little bit more control. That just means the company isn't getting its money's worth from the developer, as all that time is wasted.

In the end, this all boils down to a trade-off: some flexibility in exchange for other benefits, such as ease of maintenance and security. In most cases, you don't want to skimp on maintainability, and especially not on security. Those can become real liabilities for a company later. In most cases, it's much better to be a little bit less flexible but much more stable and secure. And who's to say there's no middle ground here? Some packages may very well be better off running as manually compiled, cutting-edge versions, but that doesn't necessarily mean that EVERYTHING must be.

2011-05-27

PHP Saddens me.

It has for a long time. Unfortunately, I'm often forced to work with it. Often, in trying to explain my sadness (which frequently gives way to anger), I reach for the same few examples of PHP's brain-dead design. PHP Sadness does a good job of explaining my pain. I have personally griped about at least half the things on their list.

If you are primarily a PHP coder, visit the site for a good indication of why your life is harder than it need be. If you are a sometimes PHP coder that specializes or prefers another language, visit for more reasons to shun PHP, and hope your language doesn't share more than a smattering of these problems. If you never program PHP, visit for the smug satisfaction of a landmine evaded.

2011-05-26

Distributed Workers

I have a project coming up where I'll need to utilize distributed workers. It's a bit odd in that the workers will come and go, and they'll most likely need a copy of the subset of the data they are working on so they can efficiently process it, but the workload is very time dependent. Put another way, I need to keep data synchronized in some fashion between the master server and the remote client in a way that is testable.

I'm thinking I'm going to see problems where two workers are working on the same data set but get slightly different results in doing so. Returning different results is not only possible but probable, since processing the data requires them to check a resource that sometimes flaps between values when in transition, for minutes at a time.

Here's the criteria for the system as I see it so far:

Server:

  • Canonical data source; Data stored in some sort of DB

  • Accepts registrations from clients/workers

  • Creates jobs/tasks in a work queue

  • Assigns jobs/tasks from work queue to registered workers

  • Accepts results from workers or times out task after appropriate wait



Client (worker):

  • Mostly shared code base (re-use modules defining data as objects)

  • Registers with server

  • Accepts tasks from server, processes data, returns result

  • Keeps copy of current set of data it is responsible for processing, only returns changes to data, not whole update



Here's what I'm wondering:

How much of this is based on my assumptions about what I'll need underneath? I've already thought of the DB structure needed to support this, and how I'll link between all the structures in the data. If I assume I'm using some sort of NoSQL solution, such as MongoDB, CouchDB (or whatever it's called now), or something else, are there assumptions I can make about the system that reduce complexity?

Are there modules available (preferably in Perl) to manage some of the work assignment tasks for me?

I would prefer to pass object state back and forth for the tasks. I can imagine passing an object name and a way to initialize that object to the state defined; that's not too hard. I DO want the objects that I'm passing easily abstracted to the DB on the server side. If the workers contain the same object code, can I do that without requiring the client to deal with DB code? That is, can I easily abstract the ORM layer out of the client? Maybe with roles using Moose?

If I use Moose, I know there's a startup speed penalty, which is not a problem. I'm more worried about any execution inefficiencies, since this is time dependent (to a sub-second level, though not quite at the millisecond level). I haven't had a chance to use Moose in a project yet, so I'm not aware of the specifics. I do hear it's tunable so I can omit features for speed, which is a nice trade-off.

Some representation for the changes in a data structure, or just JSON if that works as a common format, would be very useful. If I can find a module that provides this, great. Otherwise, I suspect I'll be writing my own after researching data diffs.

In any case, I'll update here as I come to conclusions or find solutions.

2011-05-25

Roku's own tutorials

Over at the Roku Blog RokuChris has posted a set of "Hello World" tutorials for the Roku using different components. Anyone who understands Part 4 of my tutorial set shouldn't find them too hard to fully grok.

These are different from my tutorials in that they use different components, ones more likely to be in use in the average video streaming channel. My tutorials are aimed more at using one component with a lot of variability to examine the language and give people a solid start with it.

In any case, if you've been following my tutorials, I advise you take a look!

2011-05-23

Start developing for the Roku Part 4: Iterate!

This is the fourth part of a multi-part series. Please be sure to check out Parts 1 through 3.

So far our channel displays a few rectangles we've defined previously. Now, let's explore some code constructs that allow us to process work in a more effective manner.

For now, we'll keep working with the same main.brs file as in the previous parts. This time, we're going to add the following code after the last setLayer() method call, and before the canvas.show() call:

' Create iteratively smaller boxes using a loop
' Array of colors to use
colors = [ "#AA0000", "#0000AA", "#A0A0A0", "#F0F0F0" ]
' Initial location data
shapeLocation = { x: 100, y: 100 }
shapeSize = { w: 400, h: 300 }
for i=0 to colors.count() -1
    ' Access desired color
    c = colors[i]
    ' Update the location and size data
    shapeLocation.x = shapeLocation.x + 25
    shapeLocation.y = shapeLocation.y + 25
    shapeSize["w"] = shapeSize["w"] - 50
    heightVar = "h"
    shapeSize[heightVar] = shapeSize[heightVar] - 50
    ' Add shape to canvas
    print "Adding shape"
    print shapeLocation
    print shapeSize
    canvas.setLayer(1+i, {
        color: c,
        targetRect: shapeSize,
        targetTranslation: shapeLocation,
    })
end for

That's a doozy, huh? Well, let's get to work explaining this.

The first new non-comment line is the assignment of an array. Not an associative array, as we saw previously, but a regular array, which is a set of sequential items. In this case, we are assigning four strings, each a hex color code, to a variable called colors. We'll be using these colors later.

Note: BrightScript isn't picky about what items go in an array, like some languages are. For example, we can easily create an array with the first item being an integer, the second being a string, and the third being another array or associative array.
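
For instance, this (purely illustrative, and not part of our channel) is perfectly legal:

mixedBag = [ 42, "a string", [ 1, 2, 3 ], { note: "even an associative array" } ]
print mixedBag[1]    ' prints: a string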


Next we assign two associative arrays to the variables shapeLocation and shapeSize. These hold the same things their names suggest, but we'll be changing the values we set here later.

Loopty Loop

Here we've gotten to something really new: a for loop. If you don't know what that is, I'll let Wikipedia explain the gory details and summarize it here as a way to repeat a chunk of code multiple times. In our case the loop initializes a variable i to 0, and repeats until it's no longer true that i is less than colors.count() (i is automatically incremented by one each pass through the loop).
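
If the structure still looks odd, here's a stripped-down loop on its own (just an illustration, not something to add to the channel) that prints the numbers 0 through 3 to the debugger console:

for i = 0 to 3
    print i
end for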

Now, there's one other thing not yet explained about the loop statement we just went over, and that's what colors.count() means. Since colors contains an array, count() is a method provided for arrays which returns the number of items in the array. Here the value returned is 4, since that's the number of items we set in the array on creation. If we had added or removed items since then, count() would return the current number of items in the array at that point.
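
As a quick illustration (again, not part of our channel code), count() tracks additions and removals:

colors.push("#00FF00")    ' add a fifth color to the end of the array
print colors.count()      ' prints: 5
colors.pop()              ' remove the last item again
print colors.count()      ' prints: 4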

Note: By convention we indent code within the loop. This provides an easily identifiable visual clue that this code is slightly different from the surrounding code (it may execute multiple times). Indentation and other non-enforced formatting are a very important part of the source. Ignoring the benefits they bring to the readability of the program source will most likely cost you later.


The first thing we do within the loop is assign the color we want this box to be. In this case, we take the loop variable i and use it as an index into the colors array using square brackets. The first time through the loop, i is 0, so we access the 0th (first, for all you non-computer-science people out there) item of the array. On the first pass through the loop that's the string #AA0000, which is what the variable c now contains.

Note: It may seem odd that we are looping from 0 to the number of items in the array colors minus one, but that's because of a very particular fact of history: the C programming language is the most common one on earth, and 40 years ago it was defined with arrays accessed in this manner (it actually makes some sense in context). We've been living with it ever since in many, many languages that claim some C heritage. Just remember when accessing array elements that item 0 is the first item, and the number of items in the array less one is the last item.
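
In other words (a throwaway illustration using our colors array):

print colors[0]                    ' first item: #AA0000
print colors[colors.count() - 1]   ' last item:  #F0F0F0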


Accessing and Changing Arrays and Associative Arrays

The next 5 lines of code are all accessing and setting the elements of the shapeLocation and shapeSize associative arrays in various ways. I'll cover them in order:
  • shapeLocation.x = shapeLocation.x + 25
    Element x of shapeLocation is set to the value of element x plus 25 more using dot notation.
  • shapeLocation.y = shapeLocation.y + 25
    Element y of shapeLocation is set to the value of element y plus 25 more using dot notation.
  • shapeSize["w"] = shapeSize["w"] - 50
    Element w of shapeSize is reduced by 50 using array subscript syntax.
  • heightVar = "h" and shapeSize[heightVar] = shapeSize[heightVar] - 50
    Here we set a new variable, and then use that variable to access the appropriate element of shapeSize. This is the real power of the array subscript syntax.


Note: While the same code is executed each iteration of the loop, the values of the variables end up changing each time. For example, each pass through the loop reduces shapeSize.w by 50, until it eventually ends up at 200, after starting at 400.


Display More Shapes

After a few inconsequential prints to the debugger console, the next thing we do within the loop is draw another rectangle using the setLayer() method. Here we set the layer to 1+i, so that on the first pass of the loop we don't overwrite the background we already placed on layer 0, and we draw each of the subsequently smaller rectangles using the color, size and location we've computed earlier in that pass of the loop.

Finally, we end the loop with an end for statement, and following convention de-indent the code from here on. That's the end of the new code for this part of the tutorial, and that's plenty if I do say so myself. Below you can find the complete contents of the new source/main.brs file, with a few extra spaces and comments thrown in to pretty it up.


sub main()
' Create canvas component
canvas = CreateObject("roImageCanvas")

' Set background color (no location data means full screen)
canvas.setLayer(0, { color: "#884400" })

' Display a shape
newShapeLocation = { x: 300, y: 200, w: 200, h: 100 }
canvas.setLayer(10, { color: "#00BB00", targetRect: newShapeLocation })

' Display some text
newTextAttributes = {
    color: "#0000CC"
    font: "Large"
    Halign: "Hcenter"
    Valign: "Vcenter"
}
canvas.setLayer(5, {
    text: "Hello World!",
    textAttrs: newTextAttributes,
    targetRect: {
        x: 200, y: 200, w: 200, h: 100
    }
})

' Create iteratively smaller boxes using a loop
' Array of colors to use
colors = [ "#AA0000", "#0000AA", "#A0A0A0", "#F0F0F0" ]
' Initial location data
shapeLocation = { x: 100, y: 100 }
shapeSize = { w: 400, h: 300 }
for i=0 to colors.count() -1
    ' Access desired color
    c = colors[i]
    ' Update the location and size data
    shapeLocation.x = shapeLocation.x + 25
    shapeLocation.y = shapeLocation.y + 25
    shapeSize["w"] = shapeSize["w"] - 50
    heightVar = "h"
    shapeSize[heightVar] = shapeSize[heightVar] - 50
    ' Add shape to canvas
    print "Adding shape"
    print shapeLocation
    print shapeSize
    canvas.setLayer(1+i, {
        color: c,
        targetRect: shapeSize,
        targetTranslation: shapeLocation,
    })
end for

' Show the canvas
canvas.show()

' Print something to debugger console
print "canvas shown"

' Sleep so the channel doesn't end immediately
sleep(5000)

end sub


Go ahead and package and upload the channel now. You should see what looks like rectangular rings around the original "Hello World" text (or what is visible of it, at least). This is because each subsequent rectangle was on a higher layer than the previous one and obscured it, yet they were all lower than the text we set before the loop, which stayed in front and thus visible.

That concludes Part 4 of the tutorial. Next we'll go into a bit more detail on functions, and how to use them. I'll try to make it a bit shorter than this tutorial, which ended up going a bit longer than I hoped.

2011-05-22

Tutorial part 2 updated

Part 2 of the Beginning Roku development tutorials has been updated. It was horrible before. Hopefully it's slightly less so now.

2011-05-17

Start developing for the Roku Part 3: More to the picture (associative arrays)

This is the third in a multi-part series. Please be sure to check out Part 1 and Part 2.

When we last left off, I had just shown how to side-load, or upload, your channel to the Roku using the developer mode interface. If you used the simple channel we created in Part 1, it should have resulted in an orange screen that persisted for 5 seconds. Now, it's time to add to that.

Now, we are going to add a colored box, and some text. To do so, take the main.brs file from Part 1 and add the following lines after the single existing canvas.setLayer call:

' Display a shape
newShapeLocation = { x: 300, y: 200, w: 200, h: 100 }
canvas.setLayer(10, { color: "#00BB00", targetRect: newShapeLocation })

There's a few things going on in these new lines, but first I'll explain that together they add a mostly green box that is 200 pixels wide and 100 pixels high starting 300 pixels from the left side of the screen and 200 pixels from the top of the screen.

The first line is a comment. Anything after a single-quote character until a newline is considered a comment, and is not evaluated as BrightScript code.

In the second line, we assign an associative array to the variable newShapeLocation. Associative arrays are created automatically when curly braces are used, and consist of colon-separated key-value pairs, themselves separated by commas (or newlines, as we'll see later).

Finally, we have another setLayer() call, and this time we are specifying the location of the shape we are drawing. You can see now that setLayer() expects an Associative Array for the second argument, and the placement information is supplied as another associative array under the key targetRect. We could just as easily have called setLayer as so:

canvas.setLayer(10, { color: "#00BB00", targetRect: { x: 300, y: 200, w: 200, h: 100 } })

...but that isn't quite as easy to read, is it? Later we'll cover formatting to alleviate this issue somewhat.

Okay, now that we've added a colored box, let's add some text. The following should accomplish that:

' Display some text
newTextAttributes = {
    color: "#0000CC"
    font: "Large"
    Halign: "Hcenter"
    Valign: "Vcenter"
}
canvas.setLayer(5, {
    text: "Hello World!",
    textAttrs: newTextAttributes,
    targetRect: {
        x: 200, y: 200, w: 200, h: 100
    }
})


Here we see another comment, another associative array, and another call to setLayer().

Notice how the newTextAttributes associative array spans multiple lines? This is the formatting technique I mentioned before to make the data more reasonable. Note the missing commas; in multi-line associative arrays they are optional (and as such supplying them won't hurt).

Finally, note how the setLayer() call is extended over multiple lines with the associative array, and the targetRect is defined directly as another associative array within the first. We could just as easily have passed a variable containing another associative array in its place as we did before, but with this formatting this is easy to read as is.

Adding these all into the original main.brs file results in the following:

sub main()
canvas = CreateObject("roImageCanvas")
canvas.setLayer(0, { color: "#884400" })
' Display a shape
newShapeLocation = { x: 300, y: 200, w: 200, h: 100 }
canvas.setLayer(10, { color: "#00BB00", targetRect: newShapeLocation })
' Display some text
newTextAttributes = {
    color: "#0000CC"
    font: "Large"
    Halign: "Hcenter"
    Valign: "Vcenter"
}
canvas.setLayer(5, {
    text: "Hello World!",
    textAttrs: newTextAttributes,
    targetRect: {
        x: 200, y: 200, w: 200, h: 100
    }
})
canvas.show()
print "canvas shown"
sleep(5000)
end sub

Packaging and uploading the channel now should result in an orange display, some blue text saying "Hello World!", and a green box that partially obscures the text. The reason the box obscures the text even though we defined the text later has to do with the layers we set for each (the first argument to setLayer()). The higher the layer, the "closer" the object appears to the viewer, with closer objects obscuring those on lower layers.
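
As a tiny illustration of the same idea (not part of our channel, just a sketch you could try), two overlapping boxes stack according to their layer numbers, not the order in which they were defined:

canvas.setLayer(2, { color: "#0000BB", targetRect: { x: 150, y: 150, w: 300, h: 200 } })
canvas.setLayer(1, { color: "#BB0000", targetRect: { x: 100, y: 100, w: 300, h: 200 } })
' The blue box (layer 2) still covers part of the red box (layer 1),
' even though the red box was set afterwards.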

This concludes Part 3. Next, we'll look at loops, regular arrays, and accessing associative array components.

Start developing for the Roku Part 2: Packaging and uploading

This is part 2 in a series. You may want to see Part 1 to figure out how we got to this point.

Part 2 of this tutorial will cover how to package and upload your channel to the Roku. This allows the Roku to compile your code and report any errors it encountered, and run the channel so you can test how it works.

Now that we have something to upload to the Roku, we need to package it for side-loading. Side-loading is the process of manually uploading a channel you've created using the Roku's developer mode. It doesn't require any special utilities out of the ordinary, and is very easy, but only one channel can be side-loaded at a time.

There are three steps to side-loading your channel:
  1. Enable developer mode
  2. Package your channel
  3. Upload your packaged channel
These are each extremely easy, and only #2 and #3 need be repeated to side-load after the first time.

Enabling Developer Mode

The first step to side-loading a channel is to enable developer mode. To enable developer mode on the Roku, you need to enter the following sequence on the remote: Home, Home, Home, Up, Up, Right, Left, Right, Left, Right. This should cause a special "Developer Settings" screen to come up, which offers you the option to enable (or disable, if it's already enabled) the installer. It will require a restart of the Roku, but after that you should be able to side-load channels without problem.

Packaging Your Channel

Packaging your channel for side-loading onto the Roku really just means compressing your channel into a ZIP archive. Most modern operating systems ship with some sort of built-in archival utility that can create ZIP archives, but if your operating system doesn't, you can download the free version of WinZip or WinRAR for your OS and use that to create the archive.

Note: While packaging for side-loading requires only creating a ZIP archive, packaging for upload as a Private or Public channel requires the extra step of signing the package. This is covered in the Roku SDK in the Packaging and Publishing document, and may be covered in a future tutorial.

The important thing to remember when zipping your channel content is that the channel folder itself should not be part of the ZIP file. That is, if you examine the contents of the ZIP file, you should see a manifest file and source folder at the top level (plus any additional files, such as an images folder), NOT a single folder containing those items.

Uploading your channel

To finish side-loading your channel, you need to upload it to the Roku. To upload, just browse to your Roku's IP address in a browser, which should bring up the channel packager and installer interface. Simply click the button to select your packaged (zipped) channel, and then click the install button (or the replace button, if you've already got a channel side-loaded) to upload the channel.

Note: You can find your Roku's IP address by going to the Settings section and choosing the Player Info section.

Upon uploading your channel, it will run automatically. It will also show up as the last channel in the list of channels on the Roku main screen so you can start it again without uploading it. Currently, it won't have a useful image, but that's because we haven't defined one in the manifest file.

Note: When a side-loaded channel is running, you can telnet to port 8085 using the Roku's IP address, and you'll have access to a debugger console. This will show the output of any print statements, as well as any errors encountered in compiling or running the code.

When the channel runs, you should see an orange-ish screen for 5 seconds (if you are uploading the package from Part 1) before it returns to the Roku home screen. If you had a telnet session to the debugger console open, you should also have seen a notice that the channel started running, and the output from the print statement we put in the channel in Part 1.

This concludes Part 2. In Part 3 we'll draw some more shapes on the screen and examine a few of the BrightScript core data types available (Arrays and Associative Arrays).

Changelog:
2011-05-22 21:47 Re-wrote and re-formatted.

Start developing for the Roku Part 1: My first channel

There's quite a few posts on the Roku Developer Forum about how to get started developing for the Roku, given that the platform uses a proprietary language called BrightScript. So much so, in fact, that I decided it's time to write up a few tutorials on how to get your first channel going, starting from scratch.

This isn't aimed at those who just want to take an example channel (of which there are many in the SDK) and alter it, but at those struggling with the core concepts of the language, or who have trouble following what's happening in the examples provided.

This is Part 1, which covers how to create the content of a very simple channel. Part 2 will cover how to package and upload this channel to the Roku.

First, we need to create the directory/folder structure for the channel, which will look like this:

-- myfirstchannel
 |-- manifest (a text file)
 |-- source (a folder)
 |   `-- main.brs (text file for BrightScript code)
 `-- images (a folder)
     `-- ... (no contents currently)

As you can see, we have a folder for the channel, containing a manifest file, and two more folders, source and images. Within the source folder there is a file called main.brs which will contain our starting code.

Note: The file containing the code can be called anything as long as it resides within the source folder or a sub-folder of it, and ends in .brs. More specifically, any file ending in .brs in source will be concatenated together at run-time, so the specific file names don't matter at all.
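
As a quick (purely hypothetical) illustration, you could put a helper function in a second file, say source/utils.brs, and code in source/main.brs could then call it as if the function were defined right there:

' source/utils.brs -- a hypothetical second file
function backgroundColor() as String
    return "#884400"
end function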

First things first, we need some content for our manifest file. Let's use the following for now:

title=Tutorial Channel
subtitle=The basics of BrightScript programming
mm_icon_focus_hd=pkg:/images/not_here.png
mm_icon_side_hd=pkg:/images/not_here.png
mm_icon_focus_sd=pkg:/images/not_here.png
mm_icon_side_sd=pkg:/images/not_here.png
major_version=1
minor_version=0
build_version=00000

In Windows, you'll want to make sure you can see file extensions, and that this file doesn't have one. The file contains text, but it shouldn't have a .txt extension. If you double-click the file and it automatically opens in a text editor, you need to look into how to rename the file so there is no extension. Also, please make sure you are saving these files as plain ASCII (not Word/RTF) text.

Now we need to add some content to the channel. Let's open up (create) source/main.brs in a text editor and add some content:

sub main()
canvas = CreateObject("roImageCanvas")
canvas.setLayer(0, { color: "#884400" })
canvas.show()
print "canvas shown"
sleep(5000)
end sub

Here we have a few things to examine. Firstly, there's a subroutine (a function without a return value) called main. This is how the Roku determines where to start executing code. After concatenating all the source files together, and parsing the code, it looks for a function or subroutine called main to run, and starts there.

Second, we create an object of type roImageCanvas and assign it to the canvas variable. The CreateObject() function is how you access the built-in BrightScript and Roku components that have been provided to enhance the platform. Almost every complex component of the system will be created with a call similar to this.

Third, we call the setLayer() method on the canvas object to assign some data. In this case, we are creating layer 0 with an orange-ish color (as specified by the hex color #884400). Don't worry too much about the curly braces; we'll cover those a bit later. Just remember that we passed in a color attribute when we set the layer.

Fourth, we call the show() method on the canvas object, which will cause it to actually show on the screen. Without this command, any changes to the canvas object are not visible.

Lastly, we print a simple statement that the canvas has been shown, and sleep for 5 seconds (5000 ms). The print output won't be visible on the screen, it is only output to the debugger console, but we'll see it later. The sleep statement pauses execution for a time. If we didn't do this, you wouldn't see much for your channel, as the main subroutine would end, and the channel would exit back to the Roku main screen almost immediately.

That concludes the first part of the tutorial. In Part 2 we'll cover how to package the channel you just made and upload it to the Roku for testing.

2011-05-09

Perlbrew to the rescue

Chromatic recently posted about the support lifetime of Perl, and its extension through enterprise distributions. While I don't particularly buy his arguments against the enterprise need for back-patching and supporting older versions of Perl (and I suspect neither does he, completely; he always strikes me as somewhat of a provocateur, a noble profession), I do agree that App::perlbrew is part of the solution.

While we seem to be in agreement that perlbrew is the solution, he seems to think (it's ambiguous in the post) that perlbrew can't be included in existing enterprise releases, such as RHEL 5 (which I'm most familiar with, and will restrict my examples to). I don't see any major reason that it can't be made available to existing enterprise distributions (in a supported manner, even). RHEL has a long history of providing feature enhancements and new packages/programs in their point releases (as opposed to the strictly bug and security fixes between point releases), so including perlbrew would be easily accomplished. Even if RHEL doesn't want to include it for whatever reason, getting it included in CentOS through their extras would be trivial (well, as trivial as doing anything with the CentOS developers is these days), and provide a real enhancement.

Of course, the Perl versions installed from perlbrew themselves would not necessarily be supported, but that's an easy point of demarcation to define. Different support policies could (and should) be defined for applications developed and/or deployed on a platform, as opposed to the platform itself. This allows for easy updating of subsystems that aren't related to the deployed application, but are required for security reasons.

I remember the most recent time I was migrating (and updating) RT. I spent quite a while swimming through dependency hell, making RPMs of all the required CPAN modules that weren't already available through RPMForge, EPEL and the like. During the final stages of that, visions of making my own RT bundle that auto-installed Perl through perlbrew, along with the latest relevant CPAN modules, were definitely dancing through my head. I think the world is a more barren place for my lack of motivation after the migration project was complete.

Now, for anyone who really just doesn't get why enterprise distributions need to keep the old version of Perl around, consider the following: enterprise distributions need the ability to ensure that during any update, nothing can or will go wrong (at least as much as they can). This often means limiting an installed program to the originally shipped version and back-porting non-conflicting features. When you have to upgrade hundreds of servers, this is essential. This is the continual struggle between system administrators and developers. Both groups are striving for stability, maintainability, and security, but these concepts mean slightly different things to each group. The beauty of perlbrew is that it allows each group to have their own sandbox that they can correctly apply their goals to.

Please note that while I'm not sure of the support requirements of perlbrew itself, if they aren't as minimal as possible, so it can run on as old a version of Perl as possible, then I see that as a serious design flaw. I don't believe this applies to CPAN modules in general, though.