Poetry of Programming

It’s about Ruby on Rails – Kiran Soumya


Installing RVM & Git through Proxy

RVM relies on Git.

So set the proxy for Git first.

*  Set the http_proxy environment variable
*  Set up a proxy command that tunnels the connection through the proxy:
gcc -o connect connect.c
mv connect ~/bin
echo '/home/kiran/bin/connect -H proxy.company.com:6030 $@' >> ~/bin/proxy
chmod +x ~/bin/proxy

echo "export GIT_PROXY_COMMAND=proxy" >> ~/.bashrc

Now try git clone. If it doesn’t work, try the following command.

export http_proxy=http://<username>:<password>@<proxy_ip>:<proxy_port>

The line below also works like a charm for Git:

git config --global http.proxy proxy_addr:proxy_port

Once Git is configured, RVM needs one more change, this time for curl.
Set the proxy inside your ~/.curlrc:

proxy = proxy.company.com:proxy_port

and now you can install rvm with no issues.
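If the proxy also needs authentication, curl’s config file accepts a proxy-user entry as well; a minimal ~/.curlrc sketch (the host, port and credentials are placeholders):

proxy = proxy.company.com:proxy_port
proxy-user = "username:password"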

To install RVM through the proxy:

rvm install X --proxy proxy.company.com:proxy_port

If two developers are in the same user group, we can even clone/copy the .rvm folder between the two users, without an explicit installation for each.
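A rough sketch of that copy, following the zipizap link in the references below, assuming two hypothetical accounts userA and userB that share the group devgroup (all names and paths here are placeholders):

# copy userA's RVM tree into userB's home and hand over ownership
cp -R /home/userA/.rvm /home/userB/.rvm
chown -R userB:devgroup /home/userB/.rvm
# make sure userB's shell loads RVM on login
echo '[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"' >> /home/userB/.bashrc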

Some more references:
http://blog.iwkse.homeunix.org/index.php?/archives/9-Git-Basic-setup.html
http://beginrescueend.com/
http://zipizap.wordpress.com/2010/11/02/cloning-rvm-to-other-user-you-can-just-copy-the-rvm-directory/ [This worked for me as well]


Driver Error: Svn Merge

The ‘svn merge’ command compares two trees, generates a patch, then
applies that patch to a working copy. Yes, you have complete freedom
to compare any two trees, and thereby generate any patch you want. But
that does *not* mean that ‘svn merge’ always will do what you want.
It’s *your* responsibility to make sure that the patch being produced
makes sense, and cleanly applies to your working copy.

Skipped 'src'
Skipped 'src'
Skipped 'src\au'
Skipped 'src\au\com'
Skipped 'src\au\com\forward'
Skipped 'src\au\com\forward\codeSections'
A src\au\com\forward\codeSections\DesignNotes.txt
A src\au\com\forward\codeSections\CodeSections.java
Skipped 'src\au\com\forward\codeSections\testFiles'
A src\au\com\forward\codeSections\testFiles\testin.cs
Skipped 'docs'
Skipped 'docs'
A docs\htmldoc.exe

See those skipped messages? That indicates a driver error. The merge command is trying to add and remove certain directories because the two trees aren’t related to each other at all. Please read this section of chapter 4, regarding ancestry:

http://svnbook.red-bean.com/en/1.1/ch04s03.html#svn-ch-4-sect-3.2.4

Then, after reverting, try the merge again with the --ignore-ancestry option.

So, this is what I implemented:

> Took the latest production copy as my_working_copy

> Merged the dev branch with the production branch under my_working_copy:

svn merge --ignore-ancestry prod_branch_url dev_branch_url my_working_copy/

> And this is how we avoid the driver error.

> Checked for conflicts under my_working_copy (commands sketched below)

Fix any conflicts, always in favour of the client’s requirements.

Otherwise, if there are no conflicts, check in the merge to production.
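A quick sketch of those last two steps, using the same my_working_copy placeholder as above (the conflicted path and the log message are only examples):

# list any files the merge left in conflict
svn status my_working_copy/ | grep '^C'
# after fixing a file by hand, mark it resolved
svn resolved my_working_copy/path/to/conflicted_file
# once the tree is clean, check the merge in to production
svn commit -m "Merge dev branch into production" my_working_copy/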

Finally, say The END to the project!!!

Next is What? [Samsung ad :)]


Why did the Tower of Babel Fail?

Now the whole earth used only one language, with few words. On the occasion of a migration from the east, men discovered a plain in the land of Shinar, and settled there. Then they said to one another, “Come, let us make bricks, burning them well.” So they used bricks for stone, and bitumen for mortar. Then they said, “Come, let us build ourselves a city with a tower whose top shall reach the heavens (thus making a name for ourselves), so that we may not be scattered all over the earth.” Then the Lord came down to look at the city and tower which human beings had built. The Lord said, “They are just one people, and they all have the same language. If this is what they can do as a beginning, then nothing that they resolve to do will be impossible for them. Come, let us go down, and there make such a babble of their language that they will not understand one another’s speech.” Thus the Lord dispersed them from there all over the earth, so that they had to stop building the city.

The Tower of Babel project failed because of a lack of communication, and of its consequent, organization.

“Schedule disaster, functional misfit, and system bugs all arise because the left hand doesn’t know what the right hand is doing.” Teams drift apart in assumptions.
Teams should communicate with one another in as many ways as possible: informally, by regular project meetings with technical briefings, and via a shared formal project workbook. [Or by electronic mail.]


What’s Web 2.0?

Starting from static pages, Web 1.0 moved to dynamic content (Google, Yahoo, etc.), where users can interact with web pages by providing instructions and content through graphical interfaces. In short, Web 1.0 dealt with human-to-machine communication over the web, accessible from a wide range of operating system platforms, made possible by HTTP and HTML. Beyond human-to-machine communication, the concept of Web 2.0 concentrates on the following issues, as far as I have learned:

  1. Machine-to-machine communication. You publish your content on one site, and it gets automatically shared by thousands of web pages on other servers. This becomes possible through RSS. As a user, you can get updates of a web site’s content by using an RSS feed reader.

  2. Human-to-human communication. For example, point-to-point networks, social community networks, etc., where personal resources such as pictures, emotions, biographies, and music can be shared with friends, and the friend community can grow via friends of friends.

  3. Rich user experience. Using AJAX (Asynchronous JavaScript and XML), it becomes possible to send web requests from the client side to the server side through an XMLHttpRequest call from the browser, so the user does NOT have to wait until the reply to that request comes back from the web server. One of the simplest examples: in Gmail, we can attach several files and then begin typing the mail. Let’s say the typing takes 6 minutes. In the meantime, Gmail is already sending the attachments to the server; say that takes 5 minutes. When the typing finishes, the whole process has taken only 6 minutes, whereas without AJAX it would simply take 11 (6+5) minutes. Web 2.0 sites provide, although not exactly, a rich user experience much like a desktop application. One important thing to note: AJAX is not actually a new technology, but uses old and very well-known technologies (JavaScript, XML, etc.) in a new approach.

  4. Personal contribution to web content. Web 2.0 encourages users’ personal contributions to the web’s content. A blog is such a thing, often called an “online diary”, where a person can enter his words on a daily basis. Each entry is referenced and maintained by its publish date. Blogs share some common features, like archives, trackbacks, etc. Blogs have become more popular than the Web 1.0-style “personal web site”, as a blog carries more of a personal touch. Another popular Web 2.0 concept is the wiki, called an “online encyclopedia”, where people can edit or contribute their knowledge. Bookmarking is another Web 2.0 feature, where URL bookmarks can be saved and shared with others. Basically blogs, wikis, and URL bookmarking are more conceptual than new technology, and bring a social aspect to sharing personal content. These kinds of ideas encourage “tagging”, where any item can be tagged with one or more categories (in some cases these tags are used in RSS feeds).


How to stop search engines from indexing your pages.

Got something you want to put online, but you don’t really want it showing up in search engine results? Here are two quick and easy solutions.

Use a specific meta tag

For each page you don’t want to appear in search engine results, you need just one tag. Not a description, not keywords, just a single robots meta tag:

<meta name="robots" content="noindex,nofollow,noarchive" />

Pop that into the <head> of each page, and you’re telling search engines not to index the page, not to follow any links from the page, and not to archive the page.

Create a robots.txt file

If your pages are all in a separate directory, you can also block search engines by using a robots.txt file.

Create a text file and, in it, disallow all the directories you want protected:

User-agent: *

Disallow: /nameofdirectory

Disallow: /anothernameofdirectory

Do it for all the directories you want, then save the file as robots.txt, and upload it to your main directory. The search engine robots will hit the robots.txt, find out which directories you don’t want them sniffing in, and skip them.
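As a quick sanity check (www.yourdomain.com is just a placeholder), you can fetch the live file and confirm your Disallow lines are being served from the site root:

curl http://www.yourdomain.com/robots.txt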

So there you go. Two little things that can save you a world of trouble.

However, these aren’t completely effective solutions. If you really want to block search engines from accessing your pages, you can either password-protect your pages, or keep them offline.

The choice is yours. Have fun!


Don’t blink your eye! You may not see what you saw before!

Everyone is attracted to a webpage that looks good. But are you aware that the page is dead once it has completely loaded? We then need to refresh the page now and then to get updated data.

No more refreshing the page! Learn Ajax! And make your webpage come alive!