Saturday, August 22, 2015

Objective measure of resources


A recent post by a friend got me thinking:

Part I - Theory

There should be an *objective* measure of whether a resource like this is any good...when compared to other resources of similar technicality. That caveat exists so that you can't compare a collection of 3D art to a collection of png sprites, or something - ideally the resources must be directly comparable.

Theoretically, even minor technical dissimilarities (png vs svg), or differences in picture size (1024x800 vs 1440x1200), should count against comparing two resources, except where it can be shown that the resources are technically similar overall (svg vs ai format, a collection that includes both png and svg, one with many types/sizes of images, or technical resources containing both textures (images) and 3d models) - though note that the technical similarity should be as close as possible.

Moving on, if a resource is technically similar to another one, then I propose a basic objective metric should apply: quantity vs quality.

i.e., a good resource should have a high number of 'units' and of logically independent categories, and each category should be populous, i.e. there should be many similar but slightly dissimilar pieces.

If you have a lower number of units, then your resource is necessarily sparse. If you have a high number of resources, but they are all highly logically correlated, then you have extremely low diversity (maybe your resource is a bunch of cat pictures). Finally, even if you have a high number of resources, you do not want a 'single representative per bucket', i.e. the point at which you are flooded with categorical examples (one of each type of animal, for example) but have few representations of each animal.

Since, naturally, 'larger' libraries of resources should 'accidentally' have larger numbers of categories and representations, it should be possible to compare a large resource to a small resource on the metric of quality, i.e. the ratio of representations to categories. And for resources with other varying characteristics, where one part is greater than the other - say one collection with more categories than the other, but a difference in representation and a similarity in size - this ratio should make it possible to objectively compare otherwise disparate and seemingly incomparable resources.

Finally, I propose that there is one last metric with which to compare two resources - market relevance. In order to apply this metric, as with the technical comparison, it should only be applied to markets which are extremely similar (photography databases and iconography make poor comparisons, for example).

Specifically, it works like so: if a resource contains a relatively high number of 'popular' resources compared to another resource, it should be worth more than a resource with a good category/representation ratio but a low 'popularity' relevance. Perhaps the most useful iconography to a video game market is, e.g., weaponry. If one resource provides an abundance of flowers, cooking apparel, and small creatures but has a low number of weaponry, while a second resource contains only, say, weaponry and cars plus a small number of creatures, flowers, and cooking apparel, then the second has much more inherent worth.

So I propose, to say objectively whether a resource is good or not, you should compare it with a set of other resources, and provide the following comparison:

representation/categories ratio (higher is better)
category count comparison
size comparison
utility/size ratio (higher is better, utility is out of 1, i.e. the best possible scenario is that every image is used)

Perhaps this is another comparison:
'untapped potential' comparison - utility/size ratio is similar, however one resource contains considerably more resources than another, i.e. there are more unharnessed patterns and therefore more inherent worth per item.
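A minimal sketch of how these metrics might be computed - the function names and figures here are my own placeholders, not an established scheme:

```java
// Sketch of the proposed metrics for one resource collection.
// All names and figures are hypothetical placeholders.
class ResourceMetrics {
    // Ratio of average representations-per-category to category count,
    // matching the ratios used later in this post; higher means deeper
    // coverage relative to breadth.
    static double qualityRatio(double avgRepsPerCategory, int categories) {
        return avgRepsPerCategory / categories;
    }

    // Rough total size of the collection.
    static long size(double avgRepsPerCategory, int categories) {
        return Math.round(avgRepsPerCategory * categories);
    }

    // Utility is the fraction of items actually used (out of 1), so
    // utility-per-size rewards small collections where everything gets used.
    static double utilityPerSize(double utility, long size) {
        return utility / size;
    }

    public static void main(String[] args) {
        System.out.println(qualityRatio(30, 102)); // ~0.294
        System.out.println(size(30, 102));         // 3060
    }
}
```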

Part II - Back of the envelope application

And now for a bit of hand waving, because I am a bit too lazy to actually crunch the numbers properly...this collection contains 102 self-styled 'categories', with, it looks like, on average 30 representations per category. On that alone it looks decent; however, we are lacking anything to compare it with:

Maybe not the best resource in the world, however it was one of the first off the search engine. So it can't be that bad. Nevertheless, I'm not going to use it, because it seems heavily focused on web-icons, whereas yours was more 'game icons'. So I would say it is technically dissimilar.

After looking a bit harder, I found this.

939 'icon packs', i.e. categories, with on average 60 (I'm ballparking) icons per pack - a 60/939 ratio of 0.06

compared to 30/102 - 0.29

102 vs 939 - it's clear who the winner is here...
size comparison - 60*939 ~= 56k, roughly 18x the size of this collection...
utility/size ratio - this is the interesting comparison. Looking briefly at the site, there were a high number of 'gamey' icons such as e.g. weapons (some categories here of 100 vs the average of 30, probably 2 std devs away from the average and hence enough to be significant).
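Spelling the back-of-the-envelope numbers out in code (the per-pack averages are my ballparks from above):

```java
// Comparing the two icon collections using the ballpark figures above.
class IconComparison {
    public static void main(String[] args) {
        double smallRatio = 30.0 / 102;  // 102 categories, ~30 icons each
        double largeRatio = 60.0 / 939;  // 939 packs, ~60 icons each
        long smallSize = 30 * 102;       // ~3k icons
        long largeSize = 60 * 939;       // ~56k icons
        System.out.println(smallRatio);             // ~0.29
        System.out.println(largeRatio);             // ~0.06
        System.out.println(largeSize / smallSize);  // ~18x larger
    }
}
```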

The problem being, there were maybe 1-2 of these 'utility' categories, and the rest were all what looked like...junk. Worse than that, some of these categories were broad, and when you looked at them, the icons within were highly dissimilar to the point that you could question if they really even belonged to the same category.

On the other hand, flaticon had many categories which definitely looked useful to game design. There is definitely a so-called Big Data problem hidden in here: determining which icons correlate best with their categories, and how relevant they are to game design.

However, given that there is so much doubt, from a quick glance, in measuring the metrics of flaticon vs. game-icons, I'm skeptical that game-icons' quality is much better than flaticon's.

Saturday, August 15, 2015

Building a computer from scratch - really.

One of my longer term goals is to build my own computer.

And, no, I don't mean build a computer tower, that's easy, that's for any middleschooler/highschooler chump with money.

I mean actually designing the whole computer from scratch.

The two biggest problems with doing this are:

  1. Case Design
  2. Circuit board design
And then there's a bevy of issues regarding free/open source drivers, chipsets, etc., most of which are proprietary at the moment. The biggest chipset issues revolve around Intel, so I think if you want to build your own computer you are pretty much not going to touch Intel at all (AMD ftw?).

There are a number of reasons to want to do this:

  1. Ultimate control over design
  2. Ultimate control over components
  3. The ability to keep your system up to date far beyond what even 'building a desktop' would allow.
Anyways, that was really all just a lead-up to the topic of this post - materials. Does anyone know of a good, easy-to-manipulate material for doing this sort of stuff?

3D printing sounds like the wave of the future, until you do a simple calculation on the price:

For a 2"x13"x9" sized laptop, that's 3834 cm^3 of material. Shapeways lets you custom print something at 0.28 $/cm^3, so that comes out to a price tag of 1073$ for just the chassis. And I've not even bought any parts yet.
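The arithmetic, spelled out - treat the 0.28 $/cm^3 figure as an assumption, since printing prices vary by material:

```java
// Back-of-the-envelope chassis cost for a 2"x13"x9" laptop shell,
// priced at an assumed 0.28 $/cm^3 of printed material.
class ChassisCost {
    static final double CM3_PER_IN3 = 16.387;  // 1 inch = 2.54 cm, cubed

    public static void main(String[] args) {
        double volumeIn3 = 2 * 13 * 9;               // 234 in^3
        double volumeCm3 = volumeIn3 * CM3_PER_IN3;  // ~3834 cm^3
        double cost = volumeCm3 * 0.28;              // ~$1073
        System.out.printf("%.0f cm^3 -> $%.0f%n", volumeCm3, cost);
    }
}
```

(This prices the full bounding box as solid material, so a hollowed-out shell would come in cheaper.)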

So, here's what I'm estimating the cost to be for, e.g., a custom-built laptop:

1k$ in chassis
1k$ for board
100$ for hard drive
100$ for RAM
300$ for CPU
200$ other expenses...

2.7k$ at least.

Hmm. That number needs work. And a lot of implementation.

Sunday, August 9, 2015

Hi! *waves*

I'm not dead. :)

Note: I will be using this post like a scratchpad. Check back for updates - once I am finished I will break out into multiple, organized posts.

I just don't post much anymore.


I'm doing a system conversion of Windows, not a particularly hard procedure, but I thought I'd document my procedure/findings.

I'm basically aiming to reinstall Windows clean every couple of months or so (to keep things clean and working well); however, I have a large number of installs/setups to do every time, so this has been a pain in the past.

I think doing this may also make things easier for people in the future who need to set up a system clean.

I will post more details later, however the basic strategy goes like so:

Move all (large) files to external/internal secondary hard drive.
Run as much stuff off this secondary drive as possible.
Slipstream windows updates into install image.
Reinstall every couple months.

Steam makes this pretty simple for games - you can simply move your entire folder over to your new drive, and run steam again - it should find all your installed games for you.
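As a sketch of the folder move - the real paths (something like C:\Program Files (x86)\Steam over to D:\Steam) are hypothetical, and the demo below runs on temporary directories so it is safe to execute anywhere:

```java
// Sketch: copy a Steam-style library folder to a secondary drive.
// In practice src would be the Steam install folder and dst a folder
// on the secondary drive; Steam re-discovers games on next launch.
import java.io.IOException;
import java.nio.file.*;
import java.util.stream.Stream;

class MoveLibrary {
    static void copyTree(Path src, Path dst) throws IOException {
        // walk() yields parents before children, so directories
        // are created before the files inside them are copied.
        try (Stream<Path> walk = Files.walk(src)) {
            for (Path p : (Iterable<Path>) walk::iterator) {
                Path target = dst.resolve(src.relativize(p).toString());
                Files.copy(p, target, StandardCopyOption.COPY_ATTRIBUTES);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempDirectory("steam-old");
        Files.createFile(src.resolve("steam.exe"));  // stand-in file
        Path dst = src.resolveSibling(src.getFileName() + "-new");
        copyTree(src, dst);
        System.out.println(Files.exists(dst.resolve("steam.exe"))); // true
    }
}
```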

However, I'm in the process of trying this out at the moment, so I will update once I have more specific instructions and/or success.

Basic Plan:

  1. Backup to drive - mostly done
  2. Slipstream ISO - done but didn't work?
  3. Install & update - done
  4. Reinstall/check programs - done
  5. Script to help with most of this - not started


Some of the best tools when doing a migration are the ones which tell you where all your space is, so you can plan accordingly. I prefer SpaceSniffer, which can be found here:


  Move, two junction points
  Issues with reloading profile/cookies/etc.
  Need to point icon at \Google\Chrome\Application\chrome.exe from %programfiles%


  web installer (for 9)
  old versions?


Problem points:
- AppData
  Easy to backup, hard to 'put back'
- User docs & location
  Easy to move, how to point to another directory by default?
- Program Files
  Horrible name, horrible location - Registry hack to move
- Steam
  Install to off drive area
- Development
  Large, bulky tools, difficult to re-install
  ISOs, chocolatey
- Browser data
  Most of it lives in AppData, nice to just move by default
- User defaults & Settings
  Registry export?
- Large folders with thousands of small files
  Simple but difficult to effectively backup


Installing Windows, etc.


Preparing images


Putting files in the right place(s).

Notes, Caveats, Bugs

Problems, hiccups, shortcomings

Saturday, July 20, 2013

What is LGPL - why you should consider it for your next commercial application.

So I had someone ask on G+ today what frameworks to use for C++ for commercial development. They had mentioned that Qt has a commercial, paid license - but they clearly hadn't heard of the LGPL.

So I wanted to clear that up.

So what does LGPL mean?

It means you ship Qt as a bunch of dlls alongside your application. This practically only affects Windows - on Mac there is little choice but to bundle anyway if you want to use the .app format, and on Linux you can simply specify your dependency in your package.

Note, however, that there is no real change in the size of your application. In all cases it's a packaging issue.

You also can't modify the Qt source code without making those changes available. And you have to provide a Licenses.txt file with a notice that you are using Qt and a link to Digia's website.

So don't believe anyone who says that Qt is not free for commercial development - they don't have a clue about what they're talking about.

Qt is perfect for commercial application development. It's awesome, it runs on every platform, it has a modern, simple, nice API, it's well maintained and easy to use, and there's no reason you can't use the LGPL version and switch to Commercial if you ever decide you need that extra utility.

In case you don't believe me, here is the official wording for the license:

"5. Combined Libraries.

You may place library facilities that are a work based on the Library side by side in a single library together with other library facilities that are not Applications and are not covered by this License, and convey such a combined library under terms of your choice, if you do both of the following:
  • a) Accompany the combined library with a copy of the same work based on the Library, uncombined with any other library facilities, conveyed under the terms of this License.
  • b) Give prominent notice with the combined library that part of it is a work based on the Library, and explaining where to find the accompanying uncombined form of the same work."

Monday, May 13, 2013

From a recent Ars Technica article:

With plugins and apps, there's no meaningful transition to a DRM-free world. There's no good way for distributors to test the waters and see if unprotected distribution is viable. With EME, there is. EME will keep content out of apps and on the Web, and it creates a stepping stone to a DRM-free world. That's not hurting the open Web—it's working to ensure its continued usefulness and relevance.

That's the hope.

Actually I see two problems.

DRM is popular not because it works, *but because it doesn't work*. Anti-DRM proponents like to spout that it doesn't work, which is unfortunately true. It's so easy to circumvent that patch authors/pirates are constantly circumventing it. Which compounds the problem, as now studios feel like they need to protect their work from these pirates while still maintaining profits; profits which likely accrue *thanks* to the piracy which their DRM encourages.

If DRM really worked, I suspect Hollywood's profits would actually plunge. No one would buy it, and the few who did would be very reluctant to share with friends, etc. Marketing would go through the floor, not to mention the cheap thrill of picking up pirated goods would disappear (who doesn't like a good pirate? - Hollywood has four films illustrating their popularity *wink*).

So the first real problem is pirates. It's not circumvention; it's networks of people who, for whatever reason, have decided they want to spend all day cracking, ripping, and re-distributing stuff for free (and I don't think their ends are altruistic *cough* viruses *cough*).

The second problem is that for all the anti-DRM speech, I hear very little about the alternatives. How do you sell videos to the masses, online and for a small fee? Netflix, yeah, but that uses DRM. I do think it is possible, but I don't know that anyone champions DRM-free solutions very well, or has set up any frameworks for this.

One alternative which I think could work is a system that puts responsibility on the shoulders of the sharer. Think YouTube sharing, but at a lower level. You can sell a small locked file along with a key, but the goal is not to keep the file locked up. Instead, the goal is to ensure that anyone can easily open the file, wherever/whoever they are, provided they have a key - but that it is then unmistakable who has done so. So yes, they can share with their friends or put it on a flashdrive, but if they start wide-scale uploading to warez sites they are relatively easy to track down and put out of business.
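One way such a scheme might be sketched - this is my own toy illustration, not a real product - is to stamp each sold copy with an HMAC tag derived from the buyer's ID and the seller's secret key, so copies stay openable but wide redistribution is traceable:

```java
// Sketch: tag each sold copy with an HMAC of the buyer's ID.
// Anyone can open the file; the tag just makes uploads traceable.
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

class BuyerTag {
    static String tag(byte[] sellerKey, String buyerId) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sellerKey, "HmacSHA256"));
        byte[] t = mac.doFinal(buyerId.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : t) hex.append(String.format("%02x", b));
        return hex.toString();  // embed this in the sold file's metadata
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "seller-secret".getBytes(StandardCharsets.UTF_8);
        // Different buyers get visibly different stamps:
        System.out.println(tag(key, "alice"));
        System.out.println(tag(key, "bob"));
    }
}
```

A real system would also need the tag bound to the file contents so it can't be trivially stripped, but the traceability idea is the same.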

Wednesday, July 18, 2012

Crazy Code (A ReBlog)

Ever feel like the world is filled with crap developers who think too much of themselves sometimes?

You may not be alone.
There seems to be consensus: there are too many "cool" bits of software out there
(which may in fact be just crap written by monkeys - er, sleep-deprived developers).

This is why time-proven frameworks are amazing: they make code that goes, that works with other code, and that works with large bodies of employable developers. I'm beginning to see the point behind consistent, proven, legacy systems, as much as the bleeding edge might present its visceral allures.

This is NOT an argument against writing your own shims/libraries. In fact:
  1. It's an amazing learning experience. Every library you don't write is in fact a set of problems you never learned to solve - which you may need to solve still, because said library wasn't general enough.
  2. Sharp vs Dull -
    1. Generalities are hard, and tend to be too weak to handle most cases effectively, only powerful enough to handle most cases just well enough
    2. General libraries also tend to be large, especially if they're advertising how small they are. They may also be difficult to pick up
    3. In-house libraries can be incredibly lightweight and efficient; however, they aren't going to make the toast or mow the lawn for you.
    4. In-house libraries can be easily picked up by new recruits, and since they're easier to maintain in the first place, they may be better documented
It's really whatever's best for the job. Monolithic libraries save everyone lots of headaches, but then so do agile, discrete service-oriented libraries. Perhaps the discrete service libraries should be twice as carefully out-housed.
Me? I'm sticking with DOM+CSS+jQuery, Java/Python/Perl, and C++/C. Here's to another fifty years of consistent software!


(Btw, love the new inline html editor, Google! - you may have noticed the atrocious inline styling ;) )

Wednesday, January 26, 2011

Hello Code Blog!

First, an introduction.

I am a computer engineer/computer programmer, partially in training (I am still a student).

That is to say, I know my way around a computer, and I understand some of the fundamentals. And I love code, I wish to know more about code, and I hope to further and share this love through this blog.

This blog will tend towards presentations of compact and clean code snippets, as well as general topics of code, etc.

As my first entry in this blog, I will be presenting a simple loop in Java. I make no claims as to its originality or perfection; in fact, I suspect there are better loops out there. Its purpose, in short, is classic: to find the index of a given number in an array. The caveat: it must all fit on one line, in Java.

A little background on where this came from: my professor in Algorithms class was going over arrays, and how to loop through them and retrieve values using a for loop. I commented that what he was doing could be simplified and turned into a while loop with a bit of ternary logic. Although my first stab left out a couple of edge cases, I eventually came up with the following answer, which can in fact be compiled on one line, and will correctly select the first matching value in an integer array.

while((i++<s)&&!(f=(a[i-1]==k?true:false)));return f?(i-1):-1;

At 63 characters, there are likely several optimizations that could be made to this, which I may explore in later posts. For now, feel free to download the code and hack on it; I've licensed it under the CC license. Apart from the comments, the file is implemented on a single line, at 238 characters. Personally, I find it rather interesting that Java as a language is cluttered to the degree that 175 characters must be devoted to simply setting up the loop. Even if you discount the array initialization and declaration at 30 characters (which I set up as a 3-element array) and the call to print the output at 23 characters, and add 8 characters (",int[] a") for passing the array in as a parameter to the loop function, you still have 130 characters being used for semantics, which is roughly half of the overall code. I suppose I could save on the character count by removing the static qualifiers; however, all of this is relatively moot, since the point is to get something executable on one line.
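For reference, here is the one-liner dropped into a runnable method (my own harness, not the downloadable file):

```java
// The one-line loop from above, wrapped so it can actually be run.
class OneLiner {
    // Returns the index of the first occurrence of k in a, or -1.
    static int indexOf(int[] a, int k) {
        int i = 0, s = a.length;
        boolean f = false;
        while((i++<s)&&!(f=(a[i-1]==k?true:false)));return f?(i-1):-1;
    }

    public static void main(String[] args) {
        System.out.println(indexOf(new int[]{5, 7, 9}, 7));  // 1
        System.out.println(indexOf(new int[]{5, 7, 9}, 4));  // -1
    }
}
```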

Please find the download on the Google Site page below:
oneLiner download
print "#3110 vv021|)"