Racial discrimination is ridiculous

Why is discrimination wrong?

I think that over the past decade we have been witnessing a resurgence of racial hatred around the world.

the "Greeks are lazy" trope
Brexit
terrorists
anti-Japanese sentiment among Chinese and Koreans
anti-Chinese sentiment worldwide (cf. the present-day appropriation of South China Sea islands)
anti-Russian sentiment (cf. the annexation of Crimea)
anti-Syrian-migrant sentiment
Austria's far right winning elections

Is this all just a feeling?
Possibly. A cognitive bias, call it exposure bias, fed by media over-coverage.
Unfortunately this could be the same issue as the self-fulfilling prophecy:
http://study.com/academy/lesson/self-fulfilling-prophecies-in-psychology-definition-examples.html
(A self-fulfilling prophecy is when a person unknowingly causes a prediction to come true, due to the simple fact that he or she expects it to come true.)
Applied here, I mean that society is reinforcing its own hatred by propagating bullshit.

This kind of thing easily gets traction among mobs. Why? That's another psychological effect, covered in the book 1984.
When a society is unwell, or somehow ill at ease, the discomfort tends to make people look for a culprit, reinforce group cohesion, and point outward.
In 1984 it is the virtual enemy: Eurasia.
Today's governments use the same effect: point at the outsider and accuse them of being responsible for every internal issue.
Europe is responsible for Britain's discomfort.
The USA is responsible for North Korea's discomfort.
Japan is responsible for China's suffering.
Terrorists are responsible for the USA's troubles.

Of course not. The true issue is, yet again, a psychological one; a matter of depression, I believe.
Our societies do not fulfil individuals enough for them to be at peace with themselves.
The cause? I'm not sure, but here are some ideas:
– Too much individualism. Some people have proposed that living cut off from our group, in single or couple apartments, is part of the reason.
Personally I think it varies greatly between people who depend strongly on others for social validation and people who live alone just fine and are better off that way. We all need our time alone for some calm.
– Too much materialism. That contender seems more likely to be the real cause.
Trends such as minimalism seem to be the way out of this for some people. It is also depicted in the movie "Up in the Air" with George Clooney; I love it because I identify with it a lot.

It seems we are rotting in an environment that is not suited to our nature, one that keeps us in a state of average, chronic discomfort. The economy is strongly linked to this: economic depressions cause mental depressions as well. If that isn't materialism at its finest, I don't know what is.

So there we have it: an internal mal-être is the real cause of racism. And because the two are unrelated, even if the "racial issue" could be "solved", say by closing the borders, it would not cure the primary cause of this mal-être.
Therefore, racism is wrong. Not the "God says it's wrong" kind of wrong, but the "1+1=3" kind of wrong.

Now let's move on to the pragmatic reasons why it's wrong.
The best way to realize it is to live it. I have been the target of racial hatred as a white person in Japan, so rarely that I only remember one case. But even that single case of being called a "shitty foreigner" shocked me to the core. And that is only one thing; the pernicious issue is in the details. We are segregated as foreigners in any country: you don't have the right to vote, you need a special ID card that is not the same as the one ordinary citizens carry. And in a homogeneous country like Japan you stand out a lot, so people behave according to clichés and prejudice, which is deeply annoying.
I am addressed in English right off the bat, or not spoken to at all, with people resorting to hand gestures and facial expressions rather than Japanese, which I understand very well.
Some kids shout "hello" at me in the street; they mean well, I'm sure, but it's completely twisted.
Take the opposite situation in Los Angeles:
An Asian-looking guy walks down the pavement; no American would dare shout "konnichiwa!" at him.
Why? Because you don't assume at first sight that an Asian-looking person is a foreigner; you assume they have Asian ancestry but are an American citizen, so you address them in English, normally.
Shouting "konnichiwa" would be derogatory and dangerously borderline. The person could even be Korean or Chinese, in which case the greeting would fall flat, or even turn horribly sour, given how much those countries dislike each other.
It's like shouting "guten Tag" at an Italian because he looks Caucasian. He'd be like, "wtf?"
Another rather sad observation is the discrimination in housing: apartment owners decide who can rent and who can't based on whether they tolerate foreigners. And that's not even illegal here.
I can tell you that having the real estate agent tell you "you can't live here" because the owner said so is infuriating. It's like being told you are a lesser piece of shit.
I have also been stopped by the police for an ID check, twice. It is not nice to be assumed suspicious by default.

All these little things are very upsetting and disturbing, and they taint the experience of living abroad.
Realizing how it feels is the only true way to understand that racial discrimination is an absurdity and an utter wrong in and of itself.

The third reason is that no country is uniform. Every time some Frenchman says "I'm a good Gaul", I say to myself: he can't be that oblivious, can he? The country was shaped by dozens of invasions from the south, the north, the east and whatnot over the centuries. It's the same everywhere. Japan was fairly protected on its islands, but humanity could not have sprung up in Africa and Japan simultaneously, so any human being anywhere today is just a human who migrated from the place of origin anyway. Hitler thought Aryans were superior; if they are, then Donald Trump cannot be superior too. Mathematically there can be only one superior group, and since everyone claims their own superiority, in truth no one is.

To conclude:
When did all of this already happen in the past?
World War 2.
So where do you think this could be heading?
Should we be afraid?

Light rendering (2)

Hi guys,

Part 1: light rendering on maps

Allow me to post a follow-up on the implementation of light mapping in projet SERHuM.

I am working on final gathering for global illumination right now. It is still totally broken, but we can definitely observe some results.

Capture3

In this image I set the sampling to 90% stratified to exhibit the bugs better. Still, we can see that the light that should come from the diffuse reflection off the right-hand wall is definitely flowing somewhere, and there is a beginning of truth in the indirect occlusion behind the cylinder.

Here is a schematic drawing I made in an attempt to explain how the final gather algorithm works:

lightmap_finalgather

The grey dots are the photons, living in a "floating cloud" called a photon map. I decided to spawn them from the lumels I lit in a first pass.

The second pass creates the indirect lighting: starting from the lumels again, we shoot primary rays in all directions (in red), gather the k-nearest-neighbour (KNN) photons around each hit position, and trace back to the starting lumel. If a secondary ray hits something on the way back, it is shadowed; if not, it can add some illumination. We use a double Lambert N·L: one with the photon normal and one with the normal of the origin point.
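
To make this concrete, here is a minimal sketch of the gather loop in C++. It is illustrative only: castPrimaryRay, kNearestPhotons and isShadowed are hypothetical stand-ins for the Embree ray casts and the KNN photon lookup, not the actual SERHuM code.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };
    Vec3  operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    Vec3  operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    Vec3  operator*(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
    float dot(Vec3 a, Vec3 b)        { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3  normalize(Vec3 v)          { return v * (1.f / std::sqrt(dot(v, v))); }

    struct Photon { Vec3 position, normal, power; };

    // Placeholder scene queries; the real versions would call Embree and the photon grid.
    bool castPrimaryRay(Vec3, Vec3, Vec3& hit) { hit = {}; return false; }
    std::vector<const Photon*> kNearestPhotons(Vec3, int) { return {}; }
    bool isShadowed(Vec3, Vec3) { return false; }

    // Indirect light gathered at one lumel.
    Vec3 finalGather(Vec3 lumelPos, Vec3 lumelNormal,
                     const std::vector<Vec3>& sampleDirs, int k)
    {
        Vec3 indirect{ 0.f, 0.f, 0.f };
        if (sampleDirs.empty()) return indirect;
        for (Vec3 dir : sampleDirs)                        // primary rays (red in the drawing)
        {
            Vec3 hit;
            if (!castPrimaryRay(lumelPos, dir, hit))
                continue;
            for (const Photon* ph : kNearestPhotons(hit, k))
            {
                if (isShadowed(ph->position, lumelPos))    // secondary ray back to the lumel
                    continue;
                Vec3 toLumel = normalize(lumelPos - ph->position);
                // double Lambert: one cosine with the photon normal, one with the lumel normal
                float lambert = std::max(0.f, dot(ph->normal, toLumel))
                              * std::max(0.f, dot(lumelNormal, toLumel * -1.f));
                indirect = indirect + ph->power * lambert;
            }
        }
        return indirect * (1.f / float(sampleDirs.size()));
    }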

Unfortunately, this algorithm is pretty slow: it costs O(lumels × samples × photon_density × average_radius), which is damn expensive. But I have hopes of applying some tricks like cone tracing later on. First things first: if this works at all, it will be a nice achievement in itself.

Let's look at an image computed with fully random primary rays (or so I think):

Capture4

There is one thing going well in this image, and it is expected:
the darkening of the zone behind the cylinder. It is a soft shadow, because the wall on the right acts as an area light, so that's fine.
But the ground is very unexpected: it is dark, and it has no reason to be darker than the left-side wall. It has a steeper incidence angle, I concede; the left wall directly faces the right one, so its Lambert factors are close to 100%, while the tilted ground sees the wall at roughly 45°, which with the double Lambert still gives (√2/2)² = 50% of the energy. Yet here we have almost zero on the left (where the Lambert factors are actually higher) and some noise on the right, where they should be close to zero because of the grazing incidence of the rays coming from the wall. Weird. That's a bug somewhere.
Also, the banding we see on the roof and the cylinder looks like a sign of spurious self-occlusion.

to be continued…

PS, bonus: a capture of the photon cloud with cluster colouring (one cluster = one spatial cell of the KNN lookup structure).

photocloud

Windows and paths

Microsoft doesn’t get it…

Eleven years late to the party (in 1981), Microsoft arrives and lays down a mess of a path identification system.

C:\stuff\file.ext

So the first thing they do differently is to use the backslash as the path separator.
The world was using the forward slash, so maybe they said to themselves, "hey, let's be trendy and invent something similar but different".
Thank you for the MeSs, MS.
They also decided it would be convenient if forward slashes were supported too, so they declared the two equivalent.
Here is the true story: http://blogs.msdn.com/b/larryosterman/archive/2005/06/24/432386.aspx
They knew it was bad, they fixed it, and then they kept it anyway. …wait, what?

Except slashes and backslashes ended up not being equivalent, and most of the time backslashes are mandatory: in the console if you want completion, in command-line arguments where forward slashes often cause problems, and in network paths, which never work with forward slashes, e.g. \\othermachine.
Of course, the drive letter is wrongness incarnate: it limits you to 26 drives, and if you want more it forces you to use mount points, which creates a first inconsistency. If something is not simple and elegant it should be thrown away and refactored, right now.

How do you make a relative path between two different drives? You just can't.

In Unix you would simply write:

/media/c/stuff/../../d/otherstuff

In Windows:

c:/stuff/../../d:/otherstuff

Except that's forbidden. That's right, this style of path is forbidden; they explicitly rule it out. Good job again, MS.

As if this mess weren't enough: paths were historically limited to 260 characters, that was too small, and we needed a way to have more.
Instead of simply allowing longer paths, they said: to get more, use a special long-path form with the prefix "\\?\".
Ahem… they never learn, do they? If it's not unified, it just means increased support costs all over the world. And guess what, nobody bothered.
Not even them: they don't support this form in 90% of their own APIs. Good job, MS! And the irony is that even this form has a limit, 32767 characters, not unlimited.
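
Out of curiosity, this is roughly what opting into the long-path form looks like in practice. The rules reflected here (the path must be absolute, the prefix disables slash normalisation, \\?\ for drive paths and \\?\UNC\ for shares) follow the documented behaviour, but the helper itself is only an illustrative sketch, not a recommended API.

    #include <string>

    // Convert an absolute Windows path to the extended-length ("\\?\") form.
    std::wstring toExtendedLengthPath(std::wstring path)
    {
        // The \\?\ form bypasses normalisation, so forward slashes have to be
        // converted by hand -- exactly the kind of inconsistency discussed above.
        for (wchar_t& c : path)
            if (c == L'/') c = L'\\';

        if (path.rfind(L"\\\\?\\", 0) == 0)
            return path;                              // already prefixed
        if (path.rfind(L"\\\\", 0) == 0)
            return L"\\\\?\\UNC\\" + path.substr(2);  // \\server\share becomes \\?\UNC\server\share
        return L"\\\\?\\" + path;                     // C:\dir\file.ext becomes \\?\C:\dir\file.ext
    }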

Of course, it doesn't end there. In 1993 they wanted to allow non-Latin characters, so they decided to encode characters on 2 bytes each instead of one, and implemented a crappy UCS-2, which of course cannot cover all of the world's scripts, since there are more than 65536 characters to represent; and of course it makes the old form and the wide form binary-incompatible, and endianness-sensitive on top of that. Good job, MS!

Unix chose UTF-8, which has many advantages, as listed here: http://utf8everywhere.org/
In Unix you don't get a bullshit long-form prefix; you can make relative paths across drives; you don't get special prefixes for network paths; you get binary compatibility for any language in the world; and of course only the forward slash is used as a separator, which leaves the backslash free to serve as the escape character it was meant to be, in strings or in regexes. Escaping in Windows is just… you guessed it… a mess! You need quotes everywhere and you always have to try three times before getting it right.
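
A small, generic C++ illustration of that escaping point (nothing Windows-specific here, just string literals and std::regex): every backslash in a literal Windows path has to be doubled, and doubled again when it goes through a regex.

    #include <regex>
    #include <string>

    int main()
    {
        std::string unixPath = "/media/c/stuff/file.ext";   // reads exactly as written
        std::string winPath  = "C:\\stuff\\file.ext";       // every separator escaped

        // Matching one backslash with std::regex needs four of them: two for the
        // C++ string literal, which the regex engine then reads as an escaped '\'.
        std::regex backslash("\\\\");
        std::string asUnix = std::regex_replace(winPath, backslash, "/");  // "C:/stuff/file.ext"
        return asUnix == "C:/stuff/file.ext" ? 0 : 1;
    }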

And that's still not the end of it. Because the mandatory drive-first form was an inconvenience, they recognized the superiority of the Unix form, and in NT you can address \Device\HarddiskVolume, network shares and even pipes in a unified fashion. Way to go, MS! But… this is mostly internal, and anything that isn't a driver or doesn't go through NtQueryObject cannot use these forms… gg.

So once again you get the system-on-top-of-system-on-top-of-system anti-pattern: redundancy everywhere, incompatibility, emulation, legacy, and a stinky smell of crap all over. This is MS's world.

Light rendering on maps

Also known as "lightmaps". I have finally come around to implementing a full-fledged lightmapper for projet S.E.R.Hu.M.
projet S.E.R.Hu.M. is my high-school (2002) attempt at copying Valve's GoldSrc engine and making a game out of it.

I never came close, but I still like to develop on its codebase, for experiments or just random progress. It is a piece of art, like a sculpture, except one that would take a lifetime to complete.
I always wanted to build this; it was one of the main exciting prospects I had when I first started the project: "oh yeah, when I get to make lightmaps, juicy tech in sight!"
But in 2002 I had no idea how to compute radiosity, and I thought a direct-lighting raytracer would be enough. And it could be, as long as you manually place lights everywhere, like probes.

But now that I am an educated senior graphics programmer, I have no problem grasping some of these algorithms, notably Henrik Wann Jensen's photon-map approach with final gathering.
As you can see if you follow the link, this method dates back to 1996. Many newer, crazier methods have followed; the one I'm using is actually a later variant, but still from around the 2000s.
Today we have Metropolis light transport, augmented with low-variance estimators, implemented in stochastic path tracers; and the whole thing runs on GPUs. Pretty crazy stuff.

Today we have a myriad, to the power of ten, of crazy, impossible-to-understand graphics rendering methods:
http://cseweb.ucsd.edu/~ravir/papers/invlamb/josa.pdf
https://www.solidangle.com/research/egsr2012_volume.pdf
http://www.tnw.tudelft.nl/fileadmin/Faculteit/TNW/Over_de_faculteit/Afdelingen/Radiation_Radionuclides_Reactors/Research/Research_Groups/NERA/Publications/doc/PhD_Christoforou.pdf
https://people.csail.mit.edu/fredo/TOG/tog.pdf
Some are easier…
http://www.crytek.com/download/Light_Propagation_Volumes.pdf
https://research.nvidia.com/sites/default/files/publications/GIVoxels-pg2011-authors.pdf

And that is all very well, but I will not implement something I don't fully understand. I have actually implemented LPVs; they can be seen in a product called LumenRT 2015.
Check them out: https://www.youtube.com/watch?v=dBxMCdujdUw

But I didn't want to redo a technique I had already implemented, so I went for my old target: lightmaps. This way I get to implement final gathering, yay!

First I had to write a mesh parameterizer. This wasn't easy; it was fun, but I did a crappy job, because mesh parameterization is crazy hard. So I went for an ad-hoc technique that works well with the blocky architectural designs we get out of Worldcraft (sorry, Hammer).
I decided to build a database of individual triangles, each carrying its surface area and maximum edge length. Then I grouped similar triangles into pairs, with a preference for pairs that actually share an edge in 3D. This gives me a list of quads; a rough sketch of the pairing step follows.
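
Here is that sketch. Everything in it is a guess for illustration (the Triangle fields, the 25% similarity tolerance and the greedy strategy), not the actual SERHuM code.

    #include <cmath>
    #include <cstdint>
    #include <utility>
    #include <vector>

    struct Triangle
    {
        uint32_t indices[3];    // vertex indices into the mesh
        float    area;
        float    maxEdgeLength;
    };

    // Two shared vertex indices means one shared edge in 3D.
    bool sharesEdge(const Triangle& a, const Triangle& b)
    {
        int common = 0;
        for (uint32_t ia : a.indices)
            for (uint32_t ib : b.indices)
                if (ia == ib) ++common;
        return common == 2;
    }

    // "Similar" here is an arbitrary 25% tolerance on area and longest edge.
    bool similar(const Triangle& a, const Triangle& b)
    {
        return std::fabs(a.area - b.area) < 0.25f * a.area &&
               std::fabs(a.maxEdgeLength - b.maxEdgeLength) < 0.25f * a.maxEdgeLength;
    }

    // Greedily pair triangles into quads, preferring pairs that share an edge.
    std::vector<std::pair<int, int>> pairIntoQuads(const std::vector<Triangle>& tris)
    {
        std::vector<std::pair<int, int>> quads;
        std::vector<bool> used(tris.size(), false);
        for (size_t i = 0; i < tris.size(); ++i)
        {
            if (used[i]) continue;
            int best = -1;
            for (size_t j = i + 1; j < tris.size(); ++j)
            {
                if (used[j] || !similar(tris[i], tris[j])) continue;
                if (sharesEdge(tris[i], tris[j])) { best = int(j); break; }  // preferred match
                if (best < 0) best = int(j);                                 // fallback match
            }
            if (best >= 0) { used[i] = used[best] = true; quads.push_back({ int(i), best }); }
        }
        return quads;
    }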

Then comes the packing. I took this idea and it worked great, with some personal pepper on top to adapt it to my case, for the final seasoning.

Now we're ready to render stuff. I took the approach of visiting the lumels of the lightmap, then reprojecting each lumel into 3D by interpolating the coordinates from its triangle's vertices. From this 3D point I can finally do actual lighting; this is where Embree comes into play.
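
The reprojection itself is just barycentric interpolation. A small illustrative sketch (the names are mine): given the lumel's UV inside one triangle of the lightmap chart, compute the barycentric weights in 2D and apply them to the 3D vertex positions; the resulting point is what gets handed to the ray tracer.

    #include <cmath>

    struct Vec2 { float x, y; };
    struct Vec3 { float x, y, z; };

    // Barycentric weights of point p within triangle (a, b, c) in lightmap space.
    // Returns false if p is outside the triangle or the triangle is degenerate.
    bool barycentric(Vec2 p, Vec2 a, Vec2 b, Vec2 c, float& u, float& v, float& w)
    {
        float d = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
        if (std::fabs(d) < 1e-12f) return false;
        u = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / d;
        v = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / d;
        w = 1.f - u - v;
        return u >= 0.f && v >= 0.f && w >= 0.f;
    }

    // World position of the lumel, interpolated from the triangle's 3D vertices.
    Vec3 lumelToWorld(float u, float v, float w, Vec3 pa, Vec3 pb, Vec3 pc)
    {
        return { u * pa.x + v * pb.x + w * pc.x,
                 u * pa.y + v * pb.y + w * pc.y,
                 u * pa.z + v * pb.z + w * pc.z };
    }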

Embree is freaking awesome. It is a beautiful piece of software made by Intel to run fast on Intel architectures, and fast it is. I managed to get 19 million intersections per second (Core i7) in my use case without doing any work on ray packets or ray streams (and another paper here) at all.

I quickly had direct lighting, with broken results at first: notably, all black. Then I got some black and some white, then something that looked OK mixed with weird black seams. Then I managed to get it to work completely.

Capture8

This is a view of one of the first results I got. It shows my classic warehouse scene with 3 or 4 spot lights at the ceiling.
We can still see what looks like a bug: the top iron beams are very bright. This is because their triangles are too stretched, so my sorting algorithm decided to ban them; I intend to handle this kind of geometry per vertex later.

You can see the difference from the flat lighting I had before; this is what you would get without lightmaps:

Capture1

Another spot-light view, from inside the tall observatory staircase:

Captureb

I am not sure the attenuation formula is right. It is not easy to get right, because of the non-physical units involved and the fact that infinitely small lights make no sense, so how do you design a formula that does make sense? With all the formulas I used to see, the light intensity is infinite at the light's position, then after 1 meter it becomes the "original artist light value", or if you are lucky/unlucky, that value divided by pi. Why 1 meter? Because intensity = lightcolor / distance (or distance squared), and intensity equals lightcolor when distance is 1. So if your world unit is a meter, you attenuate from 1 meter; if your unit is not a meter, your attenuation changes. THAT is the pain in the butt. It smells of arbitrariness to me. One day I'll sort this out.
Until then I use a contraption, an empirical technique where the artist specifies at what world distance the attenuation should reach 95% (so 5% of the energy remaining). In between I use a distance-squared curve, because that is the most physically correct. A possible way to wire this up is sketched below.
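
One possible way to wire up such a contraption, sketched under my own assumptions (this is not necessarily the exact SERHuM formula): fit an inverse-square-shaped curve so that it passes through 5% remaining energy at the artist's distance, and keep it finite at the light's position.

    #include <cmath>

    // attenuation(r) = 1 / (1 + (r/s)^2), with s chosen so that attenuation(d95) = 0.05.
    struct Falloff
    {
        float s;

        explicit Falloff(float d95)
        {
            // 1 / (1 + (d95/s)^2) = 0.05  =>  (d95/s)^2 = 19  =>  s = d95 / sqrt(19)
            s = d95 / std::sqrt(19.0f);
        }

        float attenuation(float distance) const
        {
            float x = distance / s;
            return 1.0f / (1.0f + x * x);   // inverse-square shape, but finite at distance 0
        }
    };

With Falloff f(10.0f), f.attenuation(10.0f) gives 0.05 and f.attenuation(0.0f) gives 1, so the artist's value is the maximum and the distance unit is whatever the artist used, which sidesteps the "why 1 meter?" arbitrariness.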

However, you find a lot of renderers that use linear attenuation, and I now know why: in the past we never used gamma-correct color encoding. We did all lighting computations in gamma space instead of linear space, which is a total mistake; it breaks everything. Of course, now that I know, I didn't make that mistake.
I even went so far as to create a color class that stores its current working space and converts from one to the other on demand. It fires asserts if operands from different spaces get mixed in a computation. Yay!
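
Something in the spirit of the sketch below (illustrative only, with a simple 2.2 power-law gamma rather than the exact sRGB curve; this is not the actual SERHuM class):

    #include <cassert>
    #include <cmath>

    enum class ColorSpace { Gamma, Linear };

    // A colour that remembers which space it lives in; mixing spaces trips an assert.
    struct Color
    {
        float r, g, b;
        ColorSpace space;

        Color(float r_, float g_, float b_, ColorSpace s) : r(r_), g(g_), b(b_), space(s) {}

        Color toLinear(float gamma = 2.2f) const
        {
            if (space == ColorSpace::Linear) return *this;
            return Color(std::pow(r, gamma), std::pow(g, gamma), std::pow(b, gamma), ColorSpace::Linear);
        }

        Color toGamma(float gamma = 2.2f) const
        {
            if (space == ColorSpace::Gamma) return *this;
            float inv = 1.0f / gamma;
            return Color(std::pow(r, inv), std::pow(g, inv), std::pow(b, inv), ColorSpace::Gamma);
        }

        Color operator+(const Color& o) const
        {
            assert(space == o.space && "mixed colour spaces in an addition");
            return Color(r + o.r, g + o.g, b + o.b, space);
        }

        Color operator*(float k) const { return Color(r * k, g * k, b * k, space); }
    };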

The final goodie is lighting from the sky, which is much more interesting than plain stupid direct lighting. I built a Monte Carlo sampler on top of a cube map I prepared with cubemapgen, which pre-bakes irradiance. However, one does not simply evaluate the ambient occlusion of a lumel; this is where the Monte Carlo sampler plays its role. It sends many random rays toward the sky and counts how many get through. Many means I can take the cube-map sample almost as-is; few means we sit in the dark.
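
In code, the visibility estimate looks roughly like the sketch below. It is a simplified illustration: rayIsBlocked stands in for the Embree occlusion query against the level, and the hemisphere sampling here is plain uniform rather than the exact sampler I use.

    #include <cmath>
    #include <functional>
    #include <random>

    struct Vec3 { float x, y, z; };
    float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Uniform random direction on the unit sphere, flipped into the hemisphere around n.
    Vec3 randomHemisphereDir(Vec3 n, std::mt19937& rng)
    {
        std::normal_distribution<float> g(0.f, 1.f);
        Vec3 d{ g(rng), g(rng), g(rng) };
        float len = std::sqrt(dot(d, d));
        d = Vec3{ d.x / len, d.y / len, d.z / len };
        if (dot(d, n) < 0.f) d = Vec3{ -d.x, -d.y, -d.z };   // keep it above the surface
        return d;
    }

    // Fraction of random rays from the lumel that escape to the sky.
    float skyVisibility(Vec3 lumelPos, Vec3 lumelNormal, int numSamples,
                        const std::function<bool(Vec3, Vec3)>& rayIsBlocked, std::mt19937& rng)
    {
        int escaped = 0;
        for (int i = 0; i < numSamples; ++i)
            if (!rayIsBlocked(lumelPos, randomHemisphereDir(lumelNormal, rng)))
                ++escaped;
        return float(escaped) / float(numSamples);
    }

The pre-convolved cube-map sample for the lumel's normal is then simply scaled by that visibility factor.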

Let’s see some images

Capture12

Here we see the effect of ambient occlusion: the indoor parts don't get light from the sky.
You can also see the noisy grain, which is due to the random sampling. I experimented with stratified sampling and got some results, but it also gives me banding. I am not sure which artefact I prefer!

The same image with 4000 samples per pixel:

Capture2e

Unfortunately this level, as it is, takes about one hour to compute at this sampling quality. Not good. I need a drastic cut; my target is one minute per level.

Now, simply more images, with some comments to go with them.

Capture33

This is an example of how smooth the lighting gets with 4k samples per lumel.

Capture19

This shows noise in the random sampler.

Capture17

This exhibits the seam problem everybody eventually hits with a mesh parameterizer. Mine is particularly bad, so I get particularly bad results.

Capture15

This is 50% stratified, so we get some noise, but… not fully random.

Capture13

Here is 100% random; we can clearly see the grain on those otherwise clean walls!

Capture1a

Nice ambient occlusion effects, in the test map.

More to come !

On the importance of Air Travel

The great petrol shortage issue.

Here is my stance on what humanity should do to preserve its level of global connectivity, its speed and convenience of travel, and the richness of its trade, for the coming centuries.
There is currently a huge fuss over nuclear power, the need for green energy, and sustainable development.
Of course, almost everybody simply agrees that this is good. So do I.
But we are missing the point. The goals are twofold:
1- Not die.
2- Continue to live, comfortably if possible.

Goal 1 is about species survival. Humanity. Just that. Global warming is not a joke, and it has the potential to seriously threaten hundreds of species, which in turn would destabilize the ecosystem and our food chain.
This will affect prices first. Because of more hail, heavier rains and more brutal storms, agriculture will suffer heavy crop losses. We can counter those losses by paying for ultra-expensive protective greenhouses and the like.
If that is not done and crops are lost, supply will have to come from somewhere else, which will then be in a state of over-demand, and prices will rise.
Add to that the fish disappearing because of rising ocean acidity, causing a wide rupture in the food chain.
In short, the rich will have to pay a hundred times more for their food, and they will be able to afford it. The poor will die.

It is already too late to reverse this, unless we actively find cheap solutions to remove CO2 from the oceans, and to a lesser extent from the air.
In the meantime it would be wise not to worsen the situation, by stopping our fuel consumption now. And, on another front, by replanting forests wherever applicable.

This needs to take the shape of a massive and immediate transition to hydrogen, generated thanks to nuclear power.
Allow me to elaborate. Nuclear power generates local heat and steam: water vapour. Plus, once a year, a good ton of radioactive material that needs to be plunged into the decay pool, then vitrified and buried ten years later.
People need to understand that this is an extremely small amount compared to the power produced. And the whole cycle is CO2-neutral; it does not harm the atmosphere, which is the vital part.

Check this out:
http://xkcd.com/1162/

This comic strip should give you a sense of why uranium is good and coal is bad.

Goal 2 relates to the problem of declining standards of living. When petrol is no more, we will have to give up many things.
Computers will probably still exist, because their parts can be transported by boat; there is no urgency there. And boats can run on steam, or wind like in the old days, or batteries like submarines, or nuclear power like aircraft carriers.
But we won't have coconuts any more. Nor bananas for that matter, or strawberries, oranges, etc.; God knows how long the list is.
Taking the bicycle to work is doable, but when it rains it's not going to be a happy day.
Moving all your belongings when a family moves house will require an expensive transport service operating with whatever is available.
Constructing a building will cost tremendously more because of the need to haul concrete and heavy steel pillars.

Now, if you are convinced, let's continue to the main thesis: the transition to hydrogen. Because hydrogen combustion engines work much like current petrol combustion engines, the industrial transition would not be as painful as, say, sending everybody back to cycling.
Two main transportation modes should transition: ground and sea.
Because boats do not crash, and cars are (relatively) small machines that stay on the ground, this is the safest place to start.

This, in turn, would allow the one industry that keeps the right to use petrol to continue to function: air transport.
Planes cannot run on hydrogen, because it would mean building tanks that never leak, which at the scale of the machines we are talking about is very difficult.
It would also increase the weight too much, because of the necessary reinforcements.

This is why it is of the utmost importance:
Air travel is the only thing that lets you reach the other side of the planet in less than two days. It can even be a direct flight between two major destinations: only a few hours to connect continents!
This is grand; it is actually the grandest thing humanity has ever had.
Air travel can get your mail delivered in 48 hours, anywhere. That is plain crazy.
It lets you travel anywhere in the world within your two weeks of vacation a year. This is crazy, and we need to realize it.
It allows people to eat coconuts in Nordic countries.
It is key to worldwide stability and to the mutual understanding, however little of it there is today, of other cultures by people who dare to travel. It is also key to many businesses. This fast physical connection across the world is of the utmost importance, and it needs to be saved.
It is also the only way to rescue alpinists, or to connect boats to icy regions like Siberia or Greenland. It is thanks to it that we have Google Maps. The list goes on!
It can only work in a world where petrol remains available for it.
When petrol costs a hundred dollars a litre, and this will happen, nobody will ever fly again except a handful of the super-rich. That is not desirable for the good of humanity.

Therefore we need to spare whatever petrol we can from now on, dedicate it all to air transport, and switch everything else to something else, hydrogen for example.

Is our knowledge base illusory?

Is everything we believe in illusory?

No, that's my short answer.
Allow me to elaborate.
First, why does the question even arise? I think it is because many people feel the need for a proof of existence.
This manifests itself in Descartes's "cogito ergo sum", and it recurs in many media.
The "Matrix" movies revolve entirely around that concept, and they grossed a total of 1.6 billion dollars, which is a good measure of how much it matters to us.
Further, the Daft Punk song that goes "I remember touch" speaks of this again.

These metaphysical musings carry over to a less existential, more concrete question: "are the things I believe in false?", which concerns one's knowledge.
Common knowledge has been refuted throughout the ages: the Earth's flatness, or the Sun's rotation around the Earth.
And furthermore, what about intentional lies? Some forces could manoeuvre the world's knowledge to steer it in the direction they want, and we absolutely witness this kind of thing in what we call "propaganda" or "disinformation".
Product commercials, advertisements, the fallacious arguments of political extremes, or organizations (like Greenpeace) shouting exaggerated and distorted facts, deliberately omitting the contradictory evidence to bury any chance of discussion and make their case stronger...
All of this can undermine one's willingness to trust the world, and one's own knowledge base, because everything we learn comes from outside, or from preconceived notions, prejudice, etc.

But I am convinced that forces exist in human psychology which make the tendency to seek truth stronger on average. Generalized to everything we know, this should mean that the global mass of knowledge and information tends toward some degree of truth.
The tendency I am speaking of is the compelling force a mathematician feels when seeking the purest proof of a theorem.
Achieving a beautiful proof is like enlightenment for a mathematician; it's like becoming Buddha. The ultimate goal of any craftsman, engineer or scientist is always to achieve perfection.
For example, as a programmer, it feels best when I manage to write the shortest automaton that solves a problem: the clearest to read, the one that best conveys intent, the most robust and generic, with no special cases handled in separate code paths.
It is the same for physicists: they all want to unify the laws of interaction into the one big, unique law of everything, because that is purer, more satisfying.
I am not very familiar with the literary world, but I am convinced that anybody dealing in creativity also seeks this kind of perfection. Who wouldn't want to paint a picture so realistic that one couldn't distinguish it from a photograph?
Or not necessarily realistic: in abstract art as well, or poetry, or plain novels, don't they feel more polished when the vocabulary is wider, the repetitions fewer, and the style lighter?

Don't most journalists seek to expose the truth? Don't photographers try to capture the quintessence of their subjects?

Now, on the other hand, what kind of gratification does one feel when stating a lie? Not much, I bet. Lies are uncomfortable. They make it hard to save face, they crumble easily, they are unstable and dangerous. They can bring shame...
And doesn't it feel good when you state a truth one day, it gets verified, and you can say "I told you so"? This is simply inherent human psychology. Truth is easier, more comfortable, and gratifying.
Only people with a dysfunctional upbringing operate with an imbalance at this level. Take the example of Cardinal Mazarin: he wrote in his manifesto that he had decided truths and lies were equal, whichever served his interests better.
This kind of personality is in fact rare. Is it the result of some global educational fallacy? Have all of us humans been manipulated (read: educated) into this by some higher, uncaring authorities, like Mazarin, to keep us under easier control?
Possibly, but there is reassuring evidence of the opposite, like monkeys feeling guilt. We have also often observed that unnatural upbringings tend to create discomfort and instability, strict upbringing for instance.
The Desperate Housewives character Bree is dysfunctional in the way she internalizes (hides) her troubles, and this spreads difficulty and unease throughout her family.
This is also covered in the movie "The Tree of Life", where a strict father leads to the death of his son.

Now, to get back to the original thesis: if we take all of the above as hypotheses, the conclusion is necessarily that truth prevails. Therefore our knowledge deserves our trust, in general.
Beware, this is no excuse to stop doubting. I stand by the old principle of always doubting. It is doubt, too, that allows truth to break through: just as a genetic algorithm tries possibilities to find good local maxima, doubt lets this natural selection of ideas improve on itself. So always doubt.

Microsoftus defectus

Microsoft's sad organizational flaws, and the unfortunate impact that follows: the majority of the planet's PC users under-use their hardware.

In the following article:

I Contribute to the Windows Kernel. We Are Slower Than Other Operating Systems. Here Is Why.” (http://blog.zorinaq.com/?e=74)

a Microsoft developer explains that, in a nutshell, things go exactly as they do in every software company: rather badly.

And that this explains why the person he is replying to noticed the:

…generally acknowledged fact that Windows is slower than Linux when running complex workloads that push network/disk/cpu scheduling to its limit

I'll let you read the whole thing, it is edifying; you can also follow the original Hacker News thread linked by the author.

So, at Microsoft, there are several flaws: notably the fact that you cannot touch the most critical parts of the code (the kernel, the filesystem…) on your own initiative, because it annoys everybody, the boss first and the test team right behind him. With lovely, scandalous gems such as:

Our low performance is not an existential threat to the business

Incremental improvements just annoy people

…[if you] tell your lead about how you improved performance of some other component on the system, he’ll just ask you whether you can accelerate your bug glide

The conclusion is therefore the following: Microsoft has become lazy. A company without motivation, barely managing to innovate enough to stay afloat.

It has fallen into the comfort of the bug-fixing crunch. Unfortunately, as a former debugger of Vue 7 (and later versions) at e-on software, a C++ codebase of several million lines, like Windows, I know only too well how one falls into that. Except it is the path of least resistance, of forgetting, and of procrastination. It is dangerous for the company, because stability comes at the expense of performance in many situations. And in the meantime no new feature gets adopted, and the code keeps growing with no apparent novelty. Yet it is innovation that brings cash flow, not bug fixes. A balance has to be found.

The second problem I see is Wirth's law: software bloat.
http://en.wikipedia.org/wiki/Software_bloat
The French version of that page translates it as "boufficiel", "inflagiciel" or "obésiciel". Lovely terms, I find.

The result: 80% of the planet today uses a system mostly written in 1993, onto which a whole pile of chrome and lacquer was later grafted to make it shine.
That was not too much of a problem until around 2004. But other OSes kept evolving, and above all kept renewing themselves at their foundation, the kernel. Thus Linux saw superb changes appear, like dynamic ticks, the Completely Fair Scheduler, and microscopic but frequent optimizations in various remote corners; they are risky to accept, but if the community is not afraid of them, they pay for themselves with use and the future brightens.

This is what we want for Windows:
http://www.silicon.fr/linux-200-lignes-de-code-qui-changent-presque-tout-42949.html

We also want PowerTOP, a utility developed by Intel to measure processor wake-up rates; when you have a laptop or any machine on battery, optimizing processor sleep is essential. Windows, for instance, kills the battery 20% faster than another OS, since nobody over there bothers to clean up the code.

http://www.extremetech.com/computing/169055-why-do-windows-pcs-have-such-terrible-battery-life-compared-to-mac-and-ios
“That’s Apple’s OS X delivering almost twice the battery life of Microsoft’s Windows 8, on almost exactly the same hardware. Go figure”
Journalists don't know it, but battery life is tied to how good the system is at not waking up at the drop of a hat, which is directly related to bloat. Not to mention context-switch handling, and the amount of bloat that has to run on every kernel tick.

Let's talk engineering. I have had occasion to read software-engineering articles on refactoring, and several intelligent people have already noted that the ultimate goal of a maintainer is not to add lines of code but, on the contrary, to remove them.

There are thousands of articles on this subject by people who have realized this simple fact:
http://blog.codinghorror.com/the-best-code-is-no-code-at-all/
http://mikegrouchy.com/blog/2012/06/write-less-code.html
etc., etc.

The problem: Microsoft, 77 billion dollars of revenue per year, more than 100,000 employees, has NOT understood this rule, even though software engineering is their core business!
Incredible. So the conclusion here is that the richest company in the world in its field does not really know its own job. That's scary!

My recommendations for Microsoft:

Windows can be saved; after all, it is not such a bad OS. Except it will need a serious cleanup.

– Drop native backward compatibility; what is the point? It is much cleaner to ship virtual machines running previous versions of Windows as an integral part of new Windows versions. The blatant proof that, to me, it is totally useless is that it does not even work. Have you tried running Duke Nukem 3D on a recent Windows? Even a 3 GHz Core i7 cannot run it properly, whereas a Pentium 75 of that era managed it.
Heart of Darkness? A French game from the 90s; it no longer works on any NT, apparently because the explorer process comes and breaks its graphics mode. QED, in my opinion.

– Seriously optimize everything that can be optimized in the kernel. Above all the context switches; the scheduler (taking inspiration from Linux's Completely Fair Scheduler would be advisable); heap memory allocation must be entirely recoded and the old code thrown away. Use lock-free algorithms everywhere it makes sense: allocation, process-list operations, file open/close…

– Throw NTFS away completely, and adopt a new filesystem that is lightweight, efficient and above all completely free, so that Universal Mass Storage can work properly on every platform. And for pity's sake, never let a marketing guy open his big mouth to say something like "ah! let's take the opportunity to add reparse-to-the-cloud features" or other idiocies of that kind, meant to follow the hype, which would only add more bloat. A filesystem must be as lightweight as possible. Under Unix you do not have 50 permission checkboxes to manage per file or folder; there is only +x+r+w for user:group:others. Why does Windows need a huge GUI with 50 checkboxes? Yet I think common sense would say a standard Unix is more secure than a standard Windows. Yet another example of useless bloat.

– Rebrand: drop the Windows brand, and provide Windows compatibility through virtual machines (bundled with the pro editions, say).

– Throw the old Win32 API completely in the trash. Make a new one.
Why does it take about 20 characters under Unix to launch a new program (fork & execv: 2 functions),
while it takes 720 with the Windows API:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms682512(v=vs.85).aspx
Seriously?

– Completely get rid of MFC.

– WPF is debatable. Clearly someone at Microsoft wanted to add yet more chrome and weight to the Windows universe and said, "look at Google, they have a nice XML-based UI tech, let's do the same", and WPF was born.
Then it died. Then actually it didn't. Read: http://www.riagenic.com/archives/963 ("The consequences of declaring WPF is dead"); a little muddle.
The basic idea is good; it is a clean technology. In my opinion this kind of thing should be kept, but work is needed to make the simple simpler. Because with WPF, the simple became complicated while the previously complicated became trivial. For example, making a prototype UI in 3 clicks was possible with Forms, but you could not apply "skins" to it, resize it at will, or get automatic widget layout. With WPF the opposite is true.

– Totally refactor the two-tier executable format model: one with a window and the other in console mode. That kind of thing is useless bloat. Console output must be unified, as it is under Linux. No more nonsense like OutputDebugString, please!

– Stop renaming every adopted concept to give it the Microsoft touch. Use the proper terms, for goodness' sake. It's simple: it looks as if Microsoft discovers the computing world 20 years after everyone else, lays a square wheel, and then later tries to round off its edges by adding bloat onto the square sides.

– Throw away the registry, and fast. Why force a system-managed database onto applications? It's communism in the bad sense of the word.

– Stop COM and other bloated horrors that serve only to build a monopoly and produce pretty, technological-sounding buzzwords, while deep down weaving a big spider web so that developers all too easily use functions that step outside the boundaries of their system (say .NET) and suddenly become platform-dependent without realizing it.

– Throw away PowerShell; that is bloat too. It would have been far wiser to build a flexible console with implementations open to third parties, the same way zsh can replace bash and rxvt can replace gnome-terminal. The tight integration with .NET was almost a good idea in the spirit of unification, if only a new shell language, a hideous one at that, had not appeared along with it!

– Visual Studio must be redone: too much bloat, too much slowness, too many endless freezes that drag the whole system down with them for long minutes. And sometimes only to wait for the timeout of a global critical section in the font rasterization system (!! taking the piss to the power of 8, Microsoft…)
The compiler is excellent, keep it; the debugger too, keep it. But IntelliSense: stop the massacre, stop the interface-related idiocies. Nobody gives a damn that it is rendered with WPF over Direct2D; it eats GPU and causes freezes, and nobody wants that.

– The new .NET 4.5 is a good thing (good JIT), keep it; C# too, it is almost clean and it performs well, keep it. Standardization should be pushed a little further, though it is already well on its way today.

– Windows RT: yet another example of Microsoft in its copycat-with-an-inferiority-complex posture. They took Android's concepts (sandboxing, full-screen activities, per-application permissions, stacked tasks…) and are trying to catch the train without inventing anything, except Metro.

– Metro is the only invention from Microsoft since… yeah. Unfortunately nobody likes it, apart from the idealist designers who think skeuomorphism is an aberration and that the brain time spent deciphering single-colour, symbolic icons is time won for other "productive" things. It sounds nice put that way, but the result is hideous and counterproductive. I hope with all my being that this movement dies quickly.

Microsoft is falling into a classic trap: fear of refactoring, fear of change. Microsoft is in a good position; they have an enormous cash flow that could let them throw themselves into a big development cycle like the one my proposals imply. And if they do not quickly decide to take the plunge, they will slowly sink, dragged down by the monstrous aggregate of bloat they haul around, which soon will no longer float. Because people are fed up.

See the large number of regressions in the user experience, for example between Vista and 7:
http://variableghz.com/2012/01/why-windows-vista-sp1-is-better-than-windows-7/

I get the impression there is not a single person with their head outside the meetings and their email who, from time to time, looks at the product and says, "hey… what are you doing there?"
It is not serious. In reality, if you watched them work you would surely think, "oh, serious people, in suits and everything"; but no, their corporate world is a dystopia, a textbook case even.

I secretly hope that Microsoft will die a natural death, that nobody will buy what they make any more, and that they will be stuck in their position by their years of timid corporatism, and we will finally be rid of the three-headed bastard rotting our world. Then spin-offs started by their most enterprising employees could finally have the freedom to give us some pleasant surprises, who knows?