HanishKVC’s General Blog zone

May 17, 2019

Curious case of Google product updates, or the lack thereof: QC checklist missing

As Android devices have such a super-duper record of keeping things up-to-date promptly, I decided against going for an Android tablet; it is a wasted dead end wrt updates.

So instead I went with Chromebook Tab 10, because

  • it is a decent enough tablet with Chrome OS,
    • so guaranteed support for 5 years,
    • security updates as well as new versions; Chrome OS is good in that sense,
  • Chrome OS now supports Android, so I can use and/or experiment with Android apps on it,
  • Chrome OS now has support for Linux (including on this device), so I can do my Linux-related experiments on it,
    • well, except for Linux kernel-level stuff (unless I don’t mind running a non-KVM emulator inside the container),
  • And with a decent Bluetooth keyboard, it can become a laptop (albeit a bit bulky, but then again, only if I want it to be a laptop).
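The kernel-level caveat above can be checked quickly from inside the container; this is just my own sketch of the check (the message strings are mine):

```shell
# Sketch: check whether the container exposes /dev/kvm; inside crostini's
# container it normally does not, so emulators fall back to slow TCG mode.
if [ -e /dev/kvm ]; then
    echo "kvm available: hardware-accelerated VMs possible"
else
    echo "no /dev/kvm: expect non-kvm (TCG) emulation only"
fi
```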

1.I) ISSUE 1

But good luck or bad luck, I ended up with a strange issue: after the 1st update of the device to v72, I never got the next update, even though other Chrome OS devices seemed to have been updated.

I searched and finally found an online location which tells which Chrome OS device has what update, and in turn a way of checking whether everyone is in the same boat or only me.

https://cros-omahaproxy.appspot.com

And strangely enough, it also seemed to indicate that there was no update available for the Tab 10 and a few other OP1-based devices.

But then again, this didn’t make sense at one level, unless Google was reneging on their promise of guaranteed (TIMELY) updates. If not timely, what’s the use, at one level?

So I searched further and found that there is a Chrome OS device recovery app. And lo and behold, it is not supported on Linux directly (seriously, a recovery GUI for a Linux-based device, supported on all other OSs but not Linux), but luckily they do provide a Linux-host-based helper recovery script.

https://dl.google.com/dl/edgedl/chromeos/recovery/linux_recovery.sh

Or rather, get the latest link from the Recover Your Chromebook support/help-center page:

https://support.google.com/chromebook/answer/1080595?hl=en

And as strange as things could get, even though the device itself and cros-omahaproxy were saying that the latest available version was v72, the recovery tool was happily downloading a v73 image as the recovery image for the Chromebook Tab 10.

1.F) FixForIssue1

The above being a contradictory and odd situation, I looked into the metadata in the recovery image, and it seemed to indicate that the match filter specified in the update image and the value of the corresponding field on the target didn’t match one another.

I tried contacting Google on Twitter, which didn’t seem to help, and I missed the related bug in the bug list. Luckily, a few days after this, the Google team seems to have found the issue on their own.

https://bugs.chromium.org/p/chromium/issues/detail?id=951376#c11

And finally the Tab 10 jumped through v73 and into v74 in quick succession.

2.I) ISSUE 2

Now I find a different issue: the signing key used for signing Google’s Linux-related apps/packages/distro expired sometime in the last month, and they had not planned on transitioning to a new key well in time. This also means one is not able to safely update the Linux distro provided by them within crostini, the Linux maya of Chrome OS.

On top of that, according to info in their bug list, they seem to have fixed the issue by re-signing all packages with new keys. But something seems to be amiss in their system, because for a custom container I created within crostini the bug persists, while the default penguin container is fine now. Have to debug this further later.

However, if I use Linux distro images from other sources, I potentially escape the problem. But then again, I wanted to depend on Google, but, but, but…

2.F) FixForIssue2WrtCrostini+

The problem seems to be that containers created using run_container.sh, as well as any previously created Google-provided default debian/stretch containers, refer to the below server path for their metadata and packages in

/etc/apt/sources.list.d/cros.list

deb  https://storage.googleapis.com/cros-packages stretch main

The packages (and metadata; whether Debian validates both I haven’t checked, but ideally it should) in this path are signed using the old (and no longer valid) key, so apt-get update fails.

However, containers created using vmc container testvm testc refer to a new server path for their metadata and packages in

/etc/apt/sources.list.d/cros.list

deb  https://storage.googleapis.com/cros-packages/74 stretch main

And the packages (and metadata…) in this path seem to be signed using the new key, so apt-get update flies smoothly.
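Given the two paths above, the obvious (unofficial, use-at-your-own-risk) workaround for an old container is to point its cros.list at the versioned path. A sketch; the “74” matches my Chrome OS milestone and is an assumption for yours:

```shell
# Hypothetical workaround sketch: rewrite cros.list to the versioned
# cros-packages/74 path (74 is my milestone; adjust for your version).
# CROS_LIST is overridable so the edit can be tried on a copy first.
CROS_LIST="${CROS_LIST:-/etc/apt/sources.list.d/cros.list}"
if [ -w "$CROS_LIST" ]; then
    sed -i 's|cros-packages stretch|cros-packages/74 stretch|' "$CROS_LIST"
    echo "updated $CROS_LIST; now run: sudo apt-get update"
else
    echo "cannot write $CROS_LIST; run as root or set CROS_LIST to a copy"
fi
```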

Google bhai, behan, maibap, please provide info about such intricacies somewhere when fixes are done. Otherwise people have to scratch their heads and/or dig around unnecessarily.

2.N) Additional note

The default Termina VM created by cros has lxc remote name google pointing to https://storage.googleapis.com/cros-containers

Meanwhile, any additional VM created using vmc start testvm seems to have the lxc remote named google pointing to https://images.linuxcontainers.org. However, run_container.sh seems to automatically reset the google remote to the proper URL before creating a new container; not sure whether vmc container does the same. Have to check.
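For reference, resetting the google remote by hand inside the termina VM would look something like the below; the set-url/add fallback and the simplestreams protocol are my assumptions about how the remote is configured, so double-check with lxc remote list first:

```shell
# Sketch (assumptions flagged above): point the lxc remote named "google"
# back at the cros-containers image server.
CROS_REMOTE_URL="https://storage.googleapis.com/cros-containers"
if command -v lxc >/dev/null 2>&1; then
    lxc remote list
    lxc remote set-url google "$CROS_REMOTE_URL" 2>/dev/null \
        || lxc remote add google "$CROS_REMOTE_URL" --protocol=simplestreams
else
    echo "lxc not found; this is meant to be run inside the termina VM"
fi
```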

There is a response on the bug list where a Googler informs that run_container.sh is deprecated, so one should use vmc going forward, or else the lxd tools directly. If Google had added a message to run_container.sh itself saying it is deprecated, that would have been great. Also remember that if you create a new container using vmc or lxc from any image other than the one provided by Google, it may not have the cros-related integration packages by default; one may have to install them manually. Have to check this.

One more thing for now: Linux on Chrome OS is still in beta. Hopefully by the time it gets a proper release, the experience will be more streamlined, documented and even better (for me that may be the day when I can create a VM within which I have control; for now the VM within crostini is locked down almost fully).

???) Oops – smalltalk – c++ – minix – Oops

Are we today prioritising the required things, and do we have the bandwidth to prioritise the required things? That is a question we need to think about, before overly assuming and depending on others to have done the right things. But then again, in a cloudy cloudy ai ai ai-cloudy world???

April 6, 2019

Tablets And Android And Chrome OS++

There is a decent OS out there which can run on everything from a smart watch to phones to tablets to laptops to computers to servers to cloud to … and it’s called Linux.

Equally it can support user interactions from commandline to graphical interactions to auditory commands to gestures to maya to … in turn over a serial interface/screen/goggles/keyboard or mic/speaker or network/cloud or any combination thereof …

And then from A – I (from ai to yeh hai) to P – I (piy hai)

Google uses it at the core of its Android and Chrome OS among others, but still refuses (or rather, for a long time had refused) to give it the place it and its facilities deserve, thus unnecessarily curtailing the flexibility it can provide to its customers.

In its eagerness to provide a new paradigm of interactions with/between users as well as logics/stupidities/programs/…, Google fully blocked out the equivalent alternates from the user-layer soldiers of its platform and in turn its users.

And also maybe ran out of time in providing similar stuff between product developers and its alternate-paradigm distribution, i.e. Android.

Thankfully it has at least retained some sense with Chrome OS, ensuring that the majority can move forward as new refinements and/or stupidities are let loose into the world, as well as keeping a fighting chance of keeping the cheapo devils out.

So Chrome OS gets relatively synchronised updates (both feature and security), and for a sufficiently long duration, unlike Android (except for the few partial Treble-ings here and there, slowly). And now they have also (thanks to themselves and the wider open source community) moved along from the initial restricted-to-controlled web-paradigm-based apps (for other developers compared to themselves, where required) to Android to finally even Linux now, still in relatively controlled & restricted mayas of mayas.

So at one level it doesn’t make sense for anyone to go with the stupidly, severely restricted (in terms of the oceans of logic/stupidity one can dip into) and let-loose-in-the-wild “Android”-based tablets, where one will be stuck with …

If only Google comes out of its current obsessive, blinkered mold of pushing only high-end vehicles of usage of its efforts/services (that too in places where many can afford whatever it pleases) and starts serving the common among us across the world …


February 28, 2010

Android AOSP for G1 ADP1 HTC Dream

Filed under: Android,Blogroll,linux,OpenSource,technology — hanishkvc @ 10:56 pm

Android AOSP for G1 / ADP1 / HTC_Dream
v01March2010, HanishKVC, Feb2010
===========================

This document gives the Steps required to build Android AOSP for ADP1/HTC Dream/G1 phone.

>> Current status: Basic kernel (boot), wifi and system image are working.
Not sure of 3D and Calendar, Contacts (with Google sync) [GoogleLoginService?] <<

Building Android Donut release using
the Ubuntu Karmic (9.10) i386 as the development / host machine
————————————————————————————————

***N*** Preparing the system

We need to get the Java 5 JDK, which Android requires, installed on Ubuntu 9.10. By
default 9.10 comes with Java 6, so we pick up Java 5 from 9.04. To do this we need to
add the below lines to /etc/apt/sources.list as the root user (or by using the Software Sources GUI).

We also need to set up a udev rule to help with debugging of Android devices. The example
below is for HTC devices; for other manufacturers, replace with the appropriate vendor id.

$ sudo bash
# echo "deb http://mirrors.us.kernel.org/ubuntu/ jaunty multiverse" >> /etc/apt/sources.list
# echo "deb http://mirrors.us.kernel.org/ubuntu/ jaunty-updates multiverse" >> /etc/apt/sources.list

# echo 'SUBSYSTEM=="usb", SYSFS{idVendor}=="0bb4", MODE="0666"' >> /etc/udev/rules.d/51-android.rules

# exit
$ sudo apt-get update

Next we need to get the required build and support applications installed:

$ sudo apt-get install git-core gnupg sun-java5-jdk flex bison gperf libsdl-dev libesd0-dev libwxgtk2.6-dev build-essential zip curl libncurses5-dev zlib1g-dev

***N*** Getting the repository and the code, setup

We need to install the repo program, which Google uses to manage their code repositories and workflow.

$ mkdir ~/bin
$ export PATH=~/bin:$PATH

$ curl http://android.git.kernel.org/repo >~/bin/repo
$ chmod a+x ~/bin/repo
$ mkdir ~/work/aosp_donut
$ cd ~/work/aosp_donut

$ repo init -u git://android.git.kernel.org/platform/manifest.git -b donut

NOTE: If you want to work with the master branch, then don’t give the -b option to repo init.
NOTE: However I haven’t had success with a master build as it stands on 22Feb2010; have to check why.

>>ALTERNATIVE: For Cyanogen’s repo (haven’t tried yet)<<
$ repo init -u git://github.com/cyanogen/android.git -b donut

$ repo sync

***N*** Getting and preparing the proprietary stuff from HTC

FILE1: htc-adp1.sfx.tgz

This file is available as “HTC Proprietary Binaries for ADP1” from
http://developer.htc.com/

Download this file and copy it into ~/work/aosp_donut/vendor/htc/dream-open

FILE2: signed-dream_devphone_userdebug-ota-14721.zip

This file is available from
http://developer.htc.com/adp.html

Download this file to the root of your android repository, i.e. ~/work/aosp_donut

$ cd ~/work/aosp_donut/vendor/htc/dream-open
$ tar -zxvf htc-adp1.sfx.tgz
$ ./htc-adp1.sfx

Note: Not sure if htc-adp1.sfx is required, because looking at unzip-files.sh, it seems like it should work even without this??? Have to check.

$ ./unzip-files.sh

***N*** Fixing some bugs in the code (rather, stricter-compiler-related issues)

ISSUE 1:
development/emulator/qtools/trace_reader.cpp:1012: error: invalid conversion from ‘const char*’ to ‘char*’
development/emulator/qtools/trace_reader.cpp:1015: error: invalid conversion from ‘const char*’ to ‘char*’
SOLUTION: Replace the char* defs with const char* definitions.

ISSUE 2:
development/emulator/qtools/dmtrace.cpp:166: error: invalid conversion from ‘const char*’ to ‘char*’
development/emulator/qtools/dmtrace.cpp:183: error: invalid conversion from ‘const char*’ to ‘char*’
SOLUTION: Type cast the assignments with (char *)

NOTE: donut_plus_aosp seems to fix these bugs, but has other large changes also; haven’t tried it yet.
NOTE: Also not sure wrt the video/audio codec h/w-accel optimized modules in donut_plus_aosp.

***N*** Building the code

Now that the source code is available and set up with the proprietary binary stuff from HTC, let us start the actual build:

$ cd ~/work/aosp_donut

$ source build/envsetup.sh
$ lunch aosp_dream_us-eng

$ make -j4

Now the generated files (boot.img, recovery.img, system.img, userdata.img) will be available at out/target/product/dream-open/

***N*** Check out this source-compiled user space (system.img) with the prebuilt-kernel-based boot.img

$ cd ~/work/aosp_donut/out/target/product/dream-open
$ rm recovery.img <=> In case you want to replace it with your favorite recovery image
$ cp /path/to/recovery_cyanogenmod_amon_ra.img recovery.img   <=>  This replaces the default recovery image with one which you want

Put the device into FASTBOOT mode (reboot/power on the device with the BACK key pressed; it should show the droids on skateboards with fastboot text displayed)

$ fastboot devices
$ fastboot erase userdata   <=>   similarly boot and cache
$ fastboot -p dream-open -w flashall
$ fastboot flash userdata userdata.img
$ fastboot reboot

***N*** Compiling android kernel source for Dream/Adp1/G1

*** Get the kernel source
$ cd ~/work/kernel

$ git clone git://android.git.kernel.org/kernel/msm.git
$ cd msm
$ git branch -r
$ git checkout -b android-msm-2.6.29 origin/android-msm-2.6.29

*** Setup the path to point to appropriate compiler tool chain

$ export PATH=$PATH:~/work/aosp_donut/prebuilt/linux-x86/toolchain/arm-eabi-4.4.0/bin

*** config the kernel

OPTION 1: Get the default config options for the kernel from what is already specified in kernel source wrt adp1

$ make msm_defconfig   <=> Rather use the next command to be safe
$ make ARCH=arm CROSS_COMPILE=arm-eabi- msm_defconfig

OR
OPTION 2: Get the config from a phone running android

$ adb pull /proc/config.gz .
$ gunzip config.gz
$ mv config .config

*** Build the kernel

$ make ARCH=arm CROSS_COMPILE=arm-eabi-

The kernel will be in arch/arm/boot/

*** Move the kernel to Android platform source directory

$ cp ~/work/kernel/msm/arch/arm/boot/zImage ~/work/aosp_donut/vendor/htc/dream-open/kernel

Now building the android platform will use this new kernel to build the boot.img and recovery.img
NOTE: Defer building the Android platform if you want to update the wifi module (wlan.ko) also

*** Build the wifi module to match the new kernel and copy it to appropriate platform directory

$ cd ~/work/aosp_donut/system/wlan/ti/sta_dk_4_0_4_32
$ export KERNEL_DIR=~/work/kernel/msm

$ make  <=> OR the next line
$ make KERNEL_DIR=~/work/kernel/msm ARCH=arm CROSS_COMPILE=arm-eabi-

$ cp ~/work/aosp_donut/system/wlan/ti/sta_dk_4_0_4_32/wlan.ko ~/work/aosp_donut/vendor/htc/dream-open/

NOTE: Looking at the comment in the wlan/ti/ … directory, there seems to be another way
of getting this to autocompile, but at this time I am not sure how that will work out

Now building the android platform will use this new wlan.ko to build the system.img (/system/lib/modules/wlan.ko)

*** Building the Android platform with new kernel and wifi module (already copied to the required locations)

$ cd ~/work/aosp_donut
$ source build/envsetup.sh
$ lunch   <=> remember to select appropriate target
$ make -j2

***N*** Burning the new images

$ export PATH=~/work/aosp_donut/out/host/linux-x86/bin:$PATH

Boot the device into fastboot mode by powering it on with the BACK button pressed.
You should see the droids on skateboards and also fastboot specified on the screen.
Connect the usb cable between device and host, if not already done.

* On the host pc do (IF ONLY UPDATING boot.img)

$ fastboot devices  <=> This should list your device
$ fastboot erase boot
$ fastboot flash boot boot.img
$ fastboot reboot

* On the host pc do (IF burning everything, i.e. all imgs (boot, recovery, system, userdata))

$ fastboot devices
$ fastboot erase boot
$ fastboot erase userdata <=> This is just in case for future
$ fastboot erase cache <=> This is just in case for future
$ fastboot -p dream-open -w flashall
$ fastboot flash userdata userdata.img
$ fastboot reboot

* Now the phone should have rebooted into Android with the new kernel/system which we just burnt. Check it out by looking into Settings->About phone

$ adb devices  == This should list your device
$ adb shell    == Now you have root access to your device

***N*** Burning into G1 (using recovery image)

Copy the boot.img and system.img to the root of the sdcard in the G1.
Boot your G1 into recovery mode by powering on the G1 with the HOME button pressed.
Get into the console.

$ mount /sdcard
$ flash_image boot /sdcard/boot.img
$ flash_image system /sdcard/system.img

NOTE: This didn’t work. Maybe because I didn’t erase userdata and load new userdata??? Have to check this later.

Repo queries
————————-

** I don’t see a direct way of finding which branch is set up by repo init, other than by looking at the .repo/manifest.xml file. Am I missing something, or is that just how one has to look?

** I see 2 different manifest files for donut, and I am not able to find a simple way of telling which of the donut manifests is used by default, and how to change it if required. Or am I interpreting something wrongly here?

** Also there is no direct info as to what release tag/branch corresponds to what at a high level (like the difference between donut and donut_plus_aosp; I also had to look at .repo/manifests.git/FETCH_HEAD to find the branches available)

** Also, the Google repo usage document assumes that the developer already understands git/repo quite well (i.e. a novice will need to dig a lot more). They could help by making the document a bit more verbose and, more importantly, by adding some use cases.
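For the first query above, the quick hack I ended up with is grepping the manifest for its default revision attribute; this parsing is my own and assumes the usual manifest layout:

```shell
# Sketch: pull the default revision (branch) out of .repo/manifest.xml.
# Assumes the manifest has a <default revision="..."> element, as the
# AOSP manifests of this era do.
MANIFEST="${MANIFEST:-.repo/manifest.xml}"
if [ -f "$MANIFEST" ]; then
    grep -o 'revision="[^"]*"' "$MANIFEST" | head -1
else
    echo "no $MANIFEST found; run from the root of the repo checkout"
fi
```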

Useful links
———————————–

http://source.android.com/download
http://source.android.com/documentation/building-for-dream
http://wiki.cyanogenmod.com/index.php/Building_from_source

SO MANY OTHER WEBSITES
kaushikdutta
http://ctso.me/2010/01/building-an-android-rom-part-1/

Experiment status till date
—————————————————-
>>Attempt 1: FAILED<< when trying with the Google android.git.kernel.org repository
a1.1 – Used master branch instead of donut
a1.2 – Did not use htc-adp1.sfx.tgz (because unzip-files.sh seems to work around this)
a1.3 – Used the prebuilt kernel which is already part of the repository, and which in turn is automatically used to create boot.img
a1.4 – Used flash_image boot boot.img and flash_image system system.img from cm’s recovery console
a1.5 – Targeted aosp_dream_us-eng

>>Attempt 2: Basic kernel (compiled locally), wifi (locally compiled) and system apps running<< Tried with the donut branch this time.
a2.1 – Used donut branch

>>Attempt 3: Have to try donut_plus_aosp<<

>>Attempt 4: ToDo << Later have to try with cyanogen’s repository.
While searching for steps to get aosp compiled for the G1, I also came across a few pages by ctso (linked above).
As I have some issues with the Cyanogenmod build, I want to stick with the Google aosp code for now, so I haven’t given Cyanogen’s repository a try yet.


December 6, 2009

The Bad and the Good, ARM Netbook will be WORSE wrt Open source

Filed under: General,OpenSource,technology — hanishkvc @ 5:14 pm
A debate has been going on in the Linux community wrt Intel, the GMA500 (Poulsbo, PowerVR) and the pathetic support for the same. Some felt (or may feel) that the potential ARM-based netbooks could solve their problem. NOW, the truth couldn’t be more contrasting than this, so I posted a response on a related and good article on Linux Journal requesting Intel to come to their senses. I am publishing my response to “that article and the comments it generated” here.
**** My comment FROM Linux Journal site ****
On December 6th, 2009 hanishkvc (not verified) says:
Hi All,
To summarise things as they stand today wrt the Netbook/UMPC/MID market and TRUE open source, and Intel vs ARM:
a) Processor
—————-
* ARM is good at power consumption but a bit low on performance
* Intel is good at performance but a bit behind on power compared to ARM levels
But given the tradeoffs, that is to be expected. Intel’s Atom is slowly moving towards ARM territory with respect to power, AND ARM is slowly moving towards Intel territory wrt performance with the Cortex-AX cores, relative to mobile platforms like netbooks.
BUT coming to FREE AVAILABILITY of DOCUMENTATION of their chips and the associated peripherals, Intel is slightly better than ARM Inc (where is the up-to-date ARMARM document, ARM Inc, wrt the latest ARM versions???). So TRUE open source people, PLEASE DON’T ASSUME that ARM is better and that one should switch from Intel to ARM (which a person mentioned above); you can’t be more wrong wrt what we ultimately want.
b) Core Chipset (All IO in General)
————————————
Keeping power consumption in mind, today ARM-based SOCs are better integrated and more flexible wrt all IO modules, be it ram, graphics, video, 3d, audio, expansion (serial/parallel buses), core support logics (timers, interrupt controllers), storage cards AND finally additional integrated coprocessors.
Intel was stuck with bad companion-chipset combinations for low-power products like netbooks (i.e. provided you want a netbook capable of full-day computing), so they had to come out with a low-power companion chipset for the Atom (be it the N series or Z series), AND the FIRST step in that direction from Intel is the US15 (Poulsbo), which in turn uses the GMA500 (PowerVR). They still have some distance to travel to match ARM SOCs here, but at least it is a start.
Hopefully either
* they will be able to get support for the GMA500 added to mainstream Linux through a regularly updated binary driver, or better, an open source driver, or best of all, by opening up the documentation for it,
* OR they will replace the PowerVR 3D+video core with their own in future generations of the Atom companion chipsets or the integrated CPU+GPU SOC (the current one is still PowerVR).
DON’T FORGET THAT the majority of the ARM SOCs out there use the PowerVR 3D plus proprietary (dependent on who makes the SOC) video accelerators. So an ARM-based netbook will be AS BAD OR WORSE THAN the current Intel netbook platform AS FAR AS TRUE OPEN SOURCE people are concerned.
OR, LUCKILY for all, Imagination opens up the documentation or releases an open source driver for the 2D, 3D and video parts of their logic.
**** END of my comment from Linux Journal site ****
Update: Rather, I forgot one more important related point while posting at LJ. It is this: if we go further and look at free documentation for the ARM-based SOCs, then things turn out to be much worse in general (there are exceptions, where certain SOC vendors have provided good documentation for the basic SOC part, but even they leave the powerful features of their SOC out of the free documentation), so we still have some distance to go before we can say open source and ARM-based netbooks in the same breath.

May 16, 2007

Short and simple commandline Bluetooth in any new Linux distros

Filed under: bluetooth,debian,linux,Nokia,OpenSource,technology — hanishkvc @ 7:22 pm

Yesterday I had to transfer some files/S60 open source programs to my Nokia 6630 mobile, and so picked up my USB Bluetooth dongle (after ages) and connected it to my Linux PC to achieve the same. I had forgotten the things I had done long ago to get it working (also, one of these days I have to find out where I noted those steps down).

Either way, I started by remembering that I have to try and use obex logic to put those files on the mobile (now come on, remembering that isn’t that difficult ;-). Soon I remembered most of the things to do, through aptitude search/show bluetooth, dpkg -L <bluetooth related packages>, some trial and error, and net searching (googling).

But to my horror, whatever I did, the connection wouldn’t establish, as the Bluetooth stack on the PC wasn’t picking up the PIN which I had just configured on the PC. After some more rtfm, dpkg -L bluez-utils and cross-verification on the bluez website, I realised that the way the PIN is specified to the Bluetooth stack on the PC has changed: now, instead of the pin_handler, it uses a dbus-based passkey handler. So I compiled the provided passkey-agent.c and resolved it. And thus could achieve the file transfer without going into Windows, though with some deficit of sleep 😉

So here are the commands one could use to work with Bluetooth devices on a Linux-based PC =>

hciconfig
– Gives info about the bluetooth hci on your pc
– Ensure the device is up and running and has required scan modes
– hcitool dev should also give some of this info

hcitool inq and hcitool scan
– Gives info about or rather identifies nearby bluetooth devices

hcitool info <BTAddr>
– Get info about remote bluetooth device

l2ping <BTAddr>
– One way to see if we can communicate with a remote bluetooth device

sdptool browse <BTAddr> or sdptool records <BTAddr>
– Gives info about the services provided by a remote bluetooth device

obexftp --nopath --noconn --uuid none --bluetooth <BTAddr> --channel <OPUSHChannelNo> --put <FileToPut>
– Allows one to send a file without specifying the pin on the remote device side
– The OPush channel number for the device is got from sdptool above

passkey-agent --default <Pin>
– The pin specified here is what the remote BT device should provide,
or what its user enters on that device when requested.

obexftp -b <BTAddr> -v -p <FileToPut>
– Allows one to put a file onto the specified BT device
– obexftp could also be used to get or list the files on the BT device
– also allows one to identify a nearby BT device by just giving -b option

obexpushd
– Allows one to receive files sent from a bluetooth device.
– Depending on who started it, the received files will be stored in the corresponding home directory
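To tie the send-side commands above together, here is a small hedged wrapper; the parsing of sdptool’s output is a guess at its usual layout, so verify against what your sdptool actually prints:

```shell
# Sketch: look up the OBEX Object Push channel via sdptool, then push a
# file with obexftp (same flags as above). The address/file are placeholders.
bt_push() {
    addr="$1"; file="$2"
    chan=$(sdptool browse "$addr" 2>/dev/null \
        | grep -A5 'OBEX Object Push' | grep -m1 'Channel' | grep -o '[0-9]*')
    if [ -z "$chan" ]; then
        echo "no OPUSH channel found for $addr" >&2
        return 1
    fi
    obexftp --nopath --noconn --uuid none \
        --bluetooth "$addr" --channel "$chan" --put "$file"
}
# usage: bt_push 00:11:22:33:44:55 program.sis
```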

Note: The old-style pin_handler doesn’t work with the latest bluez; you require a
dbus-based passkey handler, and one is provided by default by bluez-utils,
called passkey-agent.

Hope this helps anyone trying to use bluetooth devices from the commandline on a new linux distro; it will also help me remember the steps for my own future use.

March 11, 2007

Finally got my PS3: runs in India and runs Linux, but no mp2 and no timed poweroff. Also, allow Indians to register on PSN

Filed under: gaming,General,India,life,OpenSource,PS3,Sony,technology — hanishkvc @ 5:45 pm

As I had thought sometime back, i.e. that I might buy a PS3 when it comes out in India, for both its gaming features as well as its media-center kind of capabilities (especially for my family, i.e. easy to use), and more importantly for me, its computing capabilities.

Lucky me that I had to go to the US for a short trip last month, and while coming back picked up a PS3 premium edition from Atlanta (just for Sony to know: I found 3 to 4 PS3s available at an EBgames shop, with the shop guy telling me they had been there for some time and were not moving; so can I look at getting around $3K or so, wasn’t that what the Sony US president had said he would give people who find a PS3 in stock for a long time?). Thus I am saved from having to pay an additional premium in India when it releases (whenever that is; no info from Sony about it still, which is bad). And even though Sony doesn’t mention it, IT DOES HAVE A UNIVERSAL POWER SUPPLY (AT LEAST the JAN2007 model which I bought); I took a (calculated) chance and it did run without requiring any stepdown transformers.

I will be able to experiment with its PSP interfacing capabilities, once I find time to upgrade my PSP to 3.x firmware, obviously the non-native way (i.e. not a direct upgrade, because I love homebrew).

I tried installing Linux (the YDL version) and it does work. I have a problem with the mouse (rather, a PS2 mouse through a USB-to-PS2 converter) which I have to fix later; otherwise it runs fine. Sometime later I would rather come out with a fully ramdisk (initrd) based small Linux distro for it, with some key utilities which I use. With it using kboot as its otheros bootloader, that shouldn’t be too difficult. Also, as I already have a native PowerPC compiler running on the PS3, I don’t have to mess with cross compilers either (not that I don’t like them or any such thing; just lazy these days).

If anyone is interested, the kboot customisation for the PS3, or rather the full ps3 linux devkit CD, is available from kernel.org (as well as from dl.qj.net, but qj.net had download issues: it is not a simple wget for qj.net links, and power was failing every time I tried to download here, making it hard to continue a partial download).

I tried Resistance and RR7, and liked both overall. However, I did find that the PS3 doesn’t support mp2 (not layer 3 (i.e. mp3), but layer 2) decoding, while a $50 (Rs2000) DVD player supports it. Also, one more thing I wanted was a timed poweroff feature, so that I could let it play some songs and have it power itself off after a given time; I didn’t find this feature on the PS3. It would be useful if Sony could add these features to the PS3.

Also, if Sony could release direct dev tools for the PS3, i.e. not for the Linux environment but for their native PS3 OS, like MS does, then it would be even more fun. Either way I do appreciate the Sony PS3 allowing Linux to be installed; it is one of the reasons I bought the PS3.

SONY, ALSO please do allow us Indians to register on your PlayStation Network with our Indian addresses if you can. I legally bought a product from the US on my own (and also bought 2 game CDs along with it, which I think is better than what most US and Japan customers have done on average till now), but I will be using it in India; can’t I register it with my Indian address (which you don’t allow for now on your website)?

November 29, 2006

Running Mupen64 in Fedora Core 6

Filed under: emulation,fedora,gaming,n64,nintendo,OpenSource,technology — hanishkvc @ 4:34 pm

Being an emulation/simulation fan, I recently moved my interest from nes and snes (zsnes and snes9x) to the n64 (Nintendo 64). On searching the net I found that the only relatively active open source project for n64 emulation currently is mupen64, released sometime last year.

I downloaded the source and binary versions. Initially trying it on FC6, I found issues both with the mupen64 which I compiled and later with the directly downloaded binary version. After some head-breaking and trial and error, this is what I had to do to get Mupen64 running on FC6 using the ATI 3D acceleration driver (i.e. ATI’s driver and not the one from Xorg). What I also noticed is that Mupen64 works BETTER | PROPERLY once I use ATI’s driver instead of Xorg’s driver.

  1. **Fixing the SELinux issue with plugin**
      chcon -t textrel_shlib_t /pathto/mupen64-0.5/plugins/*so
  2. **Fixing the AIGLX and Composite issue with the ATI driver (’cause I use ATI drivers)**
    Current ATI accelerator drivers don’t support AIGLX/Composite along with DRI. So one is
    required to disable the AIGLX and Composite features in the X server, if one wants OpenGL
    acceleration in the Mupen64 graphics plugins. This is _essential_ if one wants good speed
    during emulation.
    #** Put the following into your /etc/X11/xorg.conf **
    Section "ServerFlags"
    Option "AIGLX" "off"
    EndSection
    Section "DRI"
    Group 0
    Mode 0666
    EndSection
    Section "Extensions"
    Option "Composite" "off"
    EndSection
  3. **Make Mupen64 use ATI's (your 3D hardware's) libGL instead of MesaGL's**
    The ATI hardware-based libGL doesn’t install into /usr/lib/, but rather into /usr/lib/ati-fglrx.
    This creates a problem because MesaGL's libGL is under /usr/lib and by default any
    program will pick that up instead of your hardware-accelerated libGL. I think FC6 (the
    livna folks) should fix this at the package level. I tried forcing mupen64 to use the
    proper libGL via the /etc/ld.so.conf.d/prgname mechanism, but that failed. I didn't dig
    further, as it was already 3 or 4 am and I still had games to try. So I worked around it
    by creating a symbolic link to the hardware-based libGL in the mupen64 directory and
    using LD_LIBRARY_PATH to force the use of the proper libGL, as shown below.

    • cd /pathto/mupen64
    • ln -s /usr/lib/ati-fglrx/libGL.so.1.2 libGL.so.1
    • export LD_LIBRARY_PATH=.
    • ./mupen64 (NOTE: Now you should be happily running mupen64 in FC6 with acceleration)

If you have taken care of the three issues mentioned above, you should now be able to happily run Mupen64 on FC6 with 3D-accelerated graphics. Enjoy.
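The SELinux and libGL fixes above can also be rolled into a small helper function, so they are easy to redo after a reinstall or driver update. This is just a sketch under assumptions: the `setup_mupen` name is mine, and the install directory and fglrx libGL path are examples you would adjust to your own system.

```shell
#!/bin/sh
# Hypothetical helper combining fixes 1 and 3 above.
# Usage (paths are assumptions -- adjust to your install):
#   setup_mupen "$HOME/mupen64-0.5" /usr/lib/ati-fglrx/libGL.so.1.2
#   cd "$HOME/mupen64-0.5" && LD_LIBRARY_PATH=. ./mupen64

setup_mupen() {
    dir="$1"    # mupen64 install directory
    hwgl="$2"   # hardware-accelerated libGL to link against

    # Fix 1: relabel the plugins so SELinux permits their text
    # relocations (guarded, since chcon may be absent or the
    # filesystem may not support labels).
    if command -v chcon >/dev/null 2>&1; then
        chcon -t textrel_shlib_t "$dir"/plugins/*.so 2>/dev/null
    fi

    # Fix 3: symlink the hardware libGL into the emulator directory,
    # so LD_LIBRARY_PATH=. picks it up ahead of MesaGL's /usr/lib copy.
    [ -e "$dir/libGL.so.1" ] || ln -s "$hwgl" "$dir/libGL.so.1"
}
```

Fix 2 (the xorg.conf AIGLX/Composite change) still has to be done by hand, since it lives in the X server configuration, not the emulator directory.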

October 20, 2006

My NEXT PC might be a PS3

Filed under: General,life,OpenSource,PS3,technology — hanishkvc @ 4:06 pm

For some time now I have been thinking of upgrading my computer (in terms of CPU and GPU) and/or buying a laptop. However, for various reasons I haven't been able to do either.

Pricey PCs (well, in a way):

Well, wrt CPUs, now that the Core 2 Duo is out, the technology-advance issue is resolved; however, with prices being a bit high here in India, I am waiting for them to drop. Coming to GPUs, with the availability of unified shader logic (more generically, the flexible, programmable pipeline) in the newly released and soon-to-be-released GPUs, the technology advance I have been waiting for will finally arrive, but then again the pricey nature of GPUs (even more severe than that of CPUs in India) makes them hard to buy.

Consoles to the rescue:

So, with assembling a PC hit by the pricey nature of new CPUs and GPUs, the other alternative I can look at is buying a console (I mean a game console).

Price becomes less of an issue, as vendors normally try to sell the console at a low cost, or at least at a lower profit margin from a hardware perspective, and then make their money on the games that are sold.

Coming to technology: at a technical level, a game console is also a computing device with capabilities similar to a PC's. However, over the last few generations, vendors have been positioning consoles as pure gaming devices and not letting users utilize the full power hidden in these products. But things are changing.

The Xbox 360 is a good console with a 3-core PowerPC ASIC (each core supporting 2 threads) and a unified-shader-based GPU from ATI. It has support for storage (HDD), networking (Ethernet and WLAN), and external expansion (USB, Bluetooth). And it has sufficient memory (512MB). However, the problem here is that Microsoft (i.e. the developer/vendor) doesn't want the Xbox to be used for anything other than gaming, in principle. There is some hope of legally circumventing this to some extent using the XNA Express developer framework, but that works only at the application level, and there too only in terms of managed code.

The PS3 from Sony, on the other hand, has the Cell processor (a PowerPC core + 8 specialised processing elements (mini CPUs)) (having architected embedded products that use multicore ASICs combining a general-purpose CPU and/or DSP with specialised processing elements from other chip vendors, I am pretty happy and looking forward to all the possibilities in these specialized ASICs with seemingly limited resources) and a GPU from NVidia (hoping against hope that it will have a surprise in terms of unified/flexible shaders; even otherwise it is still OK to some extent). It also has storage (HDD, flash cards), networking (WLAN (and ethernet???)), expansion (USB, Bluetooth, flash card interfaces (SD/…)) and memory (512MB). And TO TOP IT OFF, Sony is WILLING to let users UTILIZE the computing power of their console for whatever the user fancies. And in turn they will DO IT IN STYLE by USING/EXTENDING OPEN TECHNOLOGIES like Linux, OpenGL, OpenXYZ, GCC, and open source applications (belonging to many/any domain).

So the PS3 presents itself as a good gaming console as well as a good general/special (thanks to Cell) computing device. Even though it might be pricey in India at the beginning (only sometime around mid-2007, if I may fancy), I would still consider it a better pricey thing to buy than a pricey PC (which I would in turn have to keep upgrading at least every ~1.5 years if I want to play the latest and greatest games in their full glory).

One more reason for tilting towards a game console (the PS3 in this case) is that games will be specifically optimized to utilize the full power of the console, giving the best results. Also, games will be available for the console for at least a 4-5 year period without requiring an upgrade (rather, a change) of the console, which is not the case with PCs (i.e. if you want to fully see what the game developer wants you to see).

Consoles I might use for a similar purpose, but am not fully happy with:

Nintendo Wii: Again uses a simple PowerPC core and a simple GPU. Besides having only a simple CPU and GPU, it uses a proprietary DVD format, if I am not wrong. Also, native availability of a flexible computing environment with the option to add one's own modules/applications is questionable at this stage. One could always hack it to get these capabilities, and there will surely be communities on the internet to help with that, but that is a different story altogether.

Last generation – PS2 and Xbox: At this point in time the computing power/capabilities of these products fall below what an average user/developer might expect. Among these, the PS2 would be the one I would pick if I had to, because (a) it has official support for experimenting with Linux, and (b) it is the non-standard platform design (which is what I like and would love to experiment with), compared to the Xbox (a PC at a basic level).

October 4, 2006

Open sourcing code (which should ethically be open) is equally or more important than hiring a few open source developers or supporting open source

Filed under: life,OpenSource,technology — hanishkvc @ 1:15 pm

In Aug 2006 there was a blog entry in Linux Journal titled "Google, the godfather of opensource?". It tried to justify Google not open sourcing their code by saying that they hire people who work on open source products, that people wouldn't understand Google's code contributions so why should they bother, and so on. It really made me sad that people think in such ways and make bad things appear good. So I commented there with my thoughts on why it's not good. I am adding it here as part of consolidating my thoughts. Here follow my comments from then:

Hi,

It is good to know that Google is hiring open source developers so that they can concentrate on their open source work rather than worrying about how to earn a living in parallel with working on open source projects.

However, if Google is not open sourcing some of its code which it should have open sourced from a purely ethical point of view (but is instead hiding behind some shortcomings in the existing GPL or other open source licenses; I feel/guess it may mostly relate to web-based services …), then that is a bad thing Google is doing, and it can in no way be justified as OK just because they hire a few open source developers or support a few open source projects monetarily.

If someone says they don't think it's worth open sourcing their code because people may not understand the code or may not have any use for it, they are talking garbage. If all the initial developers of open source code had worked with that same mentality, the open source movement wouldn't have become the great movement to be reckoned with that it is today.

Yes, there are people who are slowly mentored into open source projects, but at the same time you will find a lot of people silently contributing to or using open source code/projects without any mentor to guide them, because they have some circumstance which they feel is best resolved using an open source project, and they then learn the ABCs of the project on their own from the code available to them and by experimenting with it.

NOTES: As I don't actively keep track of events on the open source front, I don't know whether Google is guilty or not. But if any company (Google or otherwise) has the attitude that whatever code it works on, which is in turn directly or indirectly built on open source projects, is not worth open sourcing just because it feels others may not understand it or may not have use for it, then this is NOT a GOOD TREND NOR ATTITUDE, NOR IS IT ETHICAL. And no one should praise such a company and argue that it is enough for such a company to (a) hire a few open source developers and let them work on their open source projects, (b) contribute monetarily to open source projects, or (c) mobilize people to work on open source projects. What I mean is that even though (a), (b) and (c) are good things in themselves, they can in NO way justify the stealing (if I may use such a harsh word) of the efforts of other open source developers, however small it might be, because it goes against the fundamentals of the open source movement, which are essential to keep the movement alive.

Binary blobs and Linux or other open source programs – greedy people – Bad Bad

Filed under: Blogroll,life,OpenSource,technology,Uncategorized — hanishkvc @ 12:48 pm

Some time back, Aug 2006 to be precise, there was a blog post on oreillynet.com about binary modules and the Linux kernel, and many people commented on it. These were my thoughts at the time, which I posted there. I am now adding them here so that my thoughts are consolidated:

I find many people making the statement that one uses the best tool for the job. Let me tell you, no one is saying there isn't good proprietary software; if there is some proprietary software you want, use it, that's your wish and your right.

But if you also agree that there are good open source products that you use, then they exist because they are open, which has allowed people to fix and enhance them to reach that level. If that openness were not there, it wouldn't have been possible to have all these great open source products. Now, if someone wants to be in this open source market, then he should respect the wishes of these open source developers (i.e. the open source licenses of these copyright owners), as his work would be of no use in this particular market (that of the open source products) he wants to be in if the underlying open source product were not there in the first place.

Put differently, as some great person humbly said centuries ago, "I am a small person who appears like a giant because I stand on the shoulders of many giants". If the people who use the open source market for their own profit realized this fundamental thing, that would be great. And I would say, if that great person's statement were understood by everyone, then we wouldn't have to live in this purely money-oriented world with all these patents (as implemented in today's world, where almost anything is patentable irrespective of the lack of PATHBREAKING innovativeness in most of the ideas) and other stupidities.

Blog at WordPress.com.