Installing the SmartFren AC682 Modem on Android

on Saturday, 29 December 2012



Android-x86 using a Smartfren AC682 modem...
A custom dialer for EVDO Rev.A USB modems (tested on the Smartfren AC682)
Steps:
DOWNLOAD FIRST >>> www.4shared.com/archive/kr6dJWiB/evdo_Android.html

1)    Boot Android, install Root Explorer v2.17.apk, open the newly installed
       Root Explorer, and use it to install PPPWidget.apk.
2)    In the WIDGET tab, long-press PPP Widget and drag it to the home screen.
       If a "PPP Widget needs root permission" prompt appears, tap Allow.


3)    Open Configure and set the APN, user, and password to "smart". Set the
       dial string to #777. Check DISABLE USB MODEM SLEEP, uncheck Automatic
       Search Modem, check Manual Port Modem and enter ttyUSB0 (that is
       ttyUSB-zero), and check SHOW Log. Then exit.
4)    Plug in the modem and touch the widget icon (green, labeled PPP, with a
       USB picture). Tap 19d2:ffde to prepare the device, then tap 19d2:ffdd,
       then tap CONNECT. If after a long wait you get Disconnecting instead of
       CONNECT, open Root Explorer and delete the entire contents of the log
       folder at /sdcard/pppwidget/log.
5)    UNPLUG the modem, plug it back in, and touch the widget icon again
       (green, labeled PPP, with a USB picture). Tap 19d2:ffde to prepare the
       device, then tap 19d2:ffdd. Then open Root Explorer, copy (OVERWRITE)
       the downloaded GPRS and GPRS-CHAT files into /sdcard/pppwidget/ppp,
       and close Root Explorer.
6)    Tap CONNECT in the PPP Widget on the home screen. Wait, and this time it is sure to show CONNECT!
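The GPRS and GPRS-CHAT files copied in step 5 are ordinary pppd dial scripts. As a point of reference only, a minimal chat(8)-style script for an EVDO Rev.A modem using the #777 dial string from step 3 might look like the sketch below; the files in the actual download may well differ.

```text
ABORT "BUSY"
ABORT "NO CARRIER"
ABORT "ERROR"
"" ATZ
OK ATDT#777
CONNECT ""
```

Each line is an expect-send pair: wait for the string on the left, send the string on the right, with ABORT lines ending the dialog early on a modem error.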


source: http://tokolinuxpalembang.blogspot.com/2012/11/install-modem-smartfren-ac682-di-android.html

Creating a Local BlankOn Linux Repository

on Thursday, 27 December 2012





Few computer users in Indonesia — very few, in fact — currently have a "decent" internet connection. Yet the usual way to install software on Linux (in this case BlankOn) is online: download, then install.

For those of us with a limited connection, that is far from practical. Worse, whenever we reinstall a Linux system we must repeat the online installation, and the packages to download can total more than 1 GB. A more practical approach is to build a local repository containing the packages we need, so that when we later reinstall our BlankOn Linux system we do not have to download everything again (which can take several hours).

The easiest way to build a local repository is with APTonCD, a tool that backs up our system's dpkg cache and turns it into a collection of .deb packages we can use as a local repository. The steps are as follows:
  1. Install all the packages you need.
  2. Don't forget to install these two packages:
    aptoncd dpkg-dev
  3. Once all the software is installed, open APTonCD.
  4. It will show the collection of packages you downloaded; choose Burn. A dialog asks for the ISO name and the medium to use. To keep everything in a single ISO, choose DVD.
  5. When burning finishes, you will be asked whether to write the ISO to optical media (DVD). Choose No.
  6. A single ISO file appears in your home directory. Extract it and rename the Packages folder to match your Linux distribution (e.g. blankon, rote, precise, or whatever you like).
  7. Generate the package database with dpkg-dev. Open a terminal and change to the directory that contains the folder of .deb packages (e.g. your home directory). Assuming the folder is named rote, the command is:
    sudo dpkg-scanpackages rote /dev/null | gzip -9c > rote/Packages.gz
  8. When that is done, move the folder onto your data disk and use it as a local repository. To use it, add its address to sources.list in this format:
    deb file:/path/to/parent folder_name/
    For example, if the folder of .deb packages is named rote and sits at this location: /media/d/softwares/linux/repo/rote
    then the local repository's entry in sources.list is:
    deb file:/media/d/softwares/linux/repo rote/
    Finally, update the apt cache with sudo apt-get update, and you can now install your favourite software offline.
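The path split in step 8 is the part that trips people up: everything up to the last path component goes after file:, and the folder name itself becomes the suffix. A small shell sketch of the rule, using the example path from the article:

```shell
# Build the sources.list line for a local repo folder.
# The path below is the article's example; substitute your own.
REPO=/media/d/softwares/linux/repo/rote
PARENT=$(dirname "$REPO")    # /media/d/softwares/linux/repo
NAME=$(basename "$REPO")     # rote
echo "deb file:$PARENT $NAME/"
```

The printed line is exactly what goes into sources.list; apt will then look for Packages.gz inside the rote/ folder, which is why dpkg-scanpackages was run from the parent directory in step 7.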
Good luck :) 
 
original article: http://curhat-blankon.blogspot.com/2012/10/membuat-repository-lokal-blankon-linux.html

Installing PlayOnLinux on Ubuntu 12.04

For Ubuntu 11.10 (Oneiric):

wget -q "http://deb.playonlinux.com/public.gpg" -O- | sudo apt-key add -
sudo wget http://deb.playonlinux.com/playonlinux_oneiric.list -O /etc/apt/sources.list.d/playonlinux.list
sudo apt-get update
sudo apt-get install playonlinux


For Ubuntu 12.04 (Precise):

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E0F72778C4676186

sudo wget http://deb.playonlinux.com/playonlinux_precise.list -O /etc/apt/sources.list.d/playonlinux.list
sudo apt-get update
sudo apt-get install playonlinux

#Great Song: Tobikata Wo Wasureta Chiisana Tori

on Sunday, 16 December 2012

Sora o kakeru hikouki
 
Mado kara miorosu kumo wa yuki no you
Anata no sumu basho e to mukatte
Kono kokoro wa yurete imasu

Kisetsu mo jikan mo subete kawatte iku
Nee mite yo hora
ORION ga chiheisen ni kagayaku

Tobikata o wasureta tori no you ni
Boku wa nanika wo miushinatte
Kizutsuita sono basho kara umaredeta
Itai hodo no shiawase wo mitsuketa

Surinuketeku shiawase hodo
Hakanai koto to wa shirazu ni ita
Surechigai ya ikidoori ni
Sotto hitomi wo sorashite ita

Kisetsu mo jikan mo oikakete miyou
Nee mite yo hora
Taiyou ga noboru awai sora wo

Tobikata o wasureta tori no you ni
Itsuka nanika wo mitsuketa nara
Ki ga tsuite sono basho kara umaredeta
Itai hodo no shiawase ni  kitto

Tobikata o wasureta tori no you ni
Boku wa nanika wo miushinatte
Kizutsuita sono basho kara umaredeta
Itai hodo no shiawase ni ima  kizuite
 
 
Translation

A Small Bird That Forgot How To Fly

The airplane that is flying in the air
The cloud looks like white snow from the window
Fly to the place where you live
My heart keeps shaking

Seasons and time, everything is changing
You look, Orion shines in the horizon

Just like a little bird that forgot how to fly
I seem to have lost sight of something
Born from the place that was hurt
I found near-pain happiness

I didn't know at the time that nothing is more fleeting than happiness slipping away
From our missed connections and resentment, I quietly looked away

Seasons and time, I want to chase and see
You look, the light colored sky where sun rises

Just like a little bird that forgot how to fly
If one day I should find something
Born from that place of realization
It must surely be that near-pain happiness

Just like a little bird that forgot how to fly
I seem to have lost sight of something
Born from the place that was hurt
That near-pain happiness, now I have found it 

How To: Install Wikipedia For Offline Access

on Friday, 07 December 2012

August 31, 2006
In the old days (say, around 1990) a must-have application when buying a computer was an encyclopedia on a CD-ROM. Hello, Grolier’s and Encarta! No more would you need a shelf full of books to look up interesting facts! When I bought an iBook, it came with a copy of World Book, which I thought quite an entertaining addition.
These days, such an addition is no longer the norm, thanks to the Internet. An incredible amount of information can be gleaned online with a quick search. However, a project started a few years ago has quickly risen to become a great resource for user-provided information on a wide variety of topics. I speak, of course, of Wikipedia. While initially just a quick repository for user feedback, it’s quickly become a resource worthy of comparison to more established sources, such as The Encyclopedia Britannica, even if its veracity may be in question.
I have a laptop, but don’t always have an Internet connection, but wondered, why can’t I have an offline copy of Wikipedia? As it turns out, I can. Now, if I’m on the road and want to look up something quickly, I don’t even have to find a hotspot — I can just turn on my laptop, pull up a browser, and find the answer. This article shows you how I did it.
Overview

Wikipedia runs on the open source software MediaWiki. This in turn runs on top of MySQL and PHP, as well as possibly Linux and Apache. My laptop runs Windows XP Professional SP2 Tablet PC Edition, so running Linux and Apache just wasn’t going to happen. Fortunately, there is a WAMP project (Windows – Apache – MySQL – PHP), which did all the hard work of that installation for me. So, all I’d have to do is:
  • Install WAMP.
  • Install MediaWiki.
  • Download and install a pages dump of Wikipedia.
These instructions should in theory work for any Windows XP SP2 machine. However, your results may vary. I take no responsibility if you try this yourself! Some anticipated caveats:
  • You need Administrator privileges. You’re installing software, as well as creating services, so you need the privileges.
  • You need disk space. The full English Wikipedia will take over 10 gigabytes when uncompressed into the database.
  • You need NTFS. The database files themselves may grow larger than 2 GB; if you’re using FAT32, you’re out of luck.
  • You’re installing a new service. By default, the server installs without remote access, and hopefully, you leave your firewall in place. However, you are still installing new services on your machine, which means they have the potential for exploitation.
  • No pictures included. These instructions do not cover the images in Wikipedia.
That said, let’s get on with the show!
Install WAMP.

[screenshot: wamp_setup.jpg]
Go to the Wampserver site and download the latest WAMP distribution (in my case, 1.6.4). Double-click the executable to run, and the defaults will pretty much be what you want. (E.g., install to C:\wamp, create a Start Menu group, do not auto-start, set DocumentRoot to www, and Launch immediately.)
A Windows Security Alert will probably pop up and ask if you want to keep blocking Apache HTTP Server. You want to select “Keep Blocking” for this question.
[screenshot: wamp_security.jpg]
Now, in your systray on the lower right side you should see a little dashboard icon, with a lock on it. It should be white, and when you mouse over it, it should say “WAMP5 – All services running – server Offline”. (When they say “offline” here, they actually mean that it’s restricting access to localhost — it’s actually online, technically.)
To verify that it’s working, open up a web browser, and point it at http://127.0.0.1/. If the installation was successful, you should see a page that looks like the following:
[screenshot: wamp_success.jpg]
That’s it for WAMP!
Install MediaWiki.

First, we’ll set up a MySQL user for Wiki. To do so, make sure WAMP is running. (If not, go to Start->Programs->WampServer->Start Wampserver.) Then, go to phpMyAdmin. Click on “Privileges”, then “Add a new User”. Use the following values:
  • User name: wikiuser
  • Host: Select “Local” from the dropdown
  • Password: Select “Use text field” from the dropdown, and pick a password of your choice
  • Generate Password: Click the “Generate” button
  • Global privileges: Leave all unchecked
Scroll to the bottom and click “Go”, and it should successfully create a user. On this confirmation page, you should have a screen to edit the user if you scroll down. Do so, to the section marked “Database-specific privileges”. Set the dropdown to “Use text field”, and enter “wikidb”. Click “Go”.
[screenshot: mysql_db_privileges.jpg]
You should be presented with a new page for Database-specific privileges. Click the “Check All” link to check all the boxes, and click “Go”.
[screenshot: mysql_db_privileges_edit.jpg]
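If you prefer the command line to phpMyAdmin, the same user setup can be expressed as SQL. The sketch below only prints the statements ('secret' is a placeholder password, not from the original article); you could pipe its output into the mysql client that ships with WAMP:

```shell
# Emit SQL equivalent to the phpMyAdmin clicks above.
# 'secret' is a placeholder password -- change it before use.
WIKI_USER=wikiuser
WIKI_DB=wikidb
WIKI_PASS=secret
cat <<SQL
CREATE USER '$WIKI_USER'@'localhost' IDENTIFIED BY '$WIKI_PASS';
GRANT ALL PRIVILEGES ON $WIKI_DB.* TO '$WIKI_USER'@'localhost';
FLUSH PRIVILEGES;
SQL
```

Granting all privileges on wikidb.* matches the “Check All” step for the database-specific privileges, without touching global privileges.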
Download the latest stable release of MediaWiki. At the time of this writing, that was version 1.7.1. It’s a .tar.gz file, so you’ll need a program to expand it — I recommend the shareware program WinRAR. When you unpack this, you’ll create a folder named mediawiki-1.7.1. Rename this to wikipedia, and move it to C:\wamp\www.
If you now visit http://127.0.0.1/wikipedia/, you should get a splash page saying to “setup the wiki” first. Follow that link, and you should get a “Site config” page. I used these values for this form:
  • Wiki name: Wikipedia Offline
  • (Admin) Password: custom password
  • Database host: localhost
  • Database name: wikidb
  • DB username: wikiuser
  • DB password: same password used when creating MySQL user
The other defaults were fine. Once done, I went in Windows Explorer to C:\wamp\www\wikipedia\config, and moved the file LocalSettings.php up one directory to C:\wamp\www\wikipedia.
Another check of http://127.0.0.1/wikipedia/ should state that “MediaWiki has been successfully installed.”
Download and install a pages dump of Wikipedia.

You can download a copy of the English Wikipedia pages from http://download.wikimedia.org/enwiki/latest/. However, you should first check this page for the “enwiki” entry, to make sure the dump completed successfully. The file you will want is named enwiki-latest-pages-articles.xml.bz2. This contains all the article pages, but none of the revisions or history. You just want the articles, right? As of this writing, that file is around 1.5 GB, compressed.
If you don’t already, you should make sure you have Java installed. If you don’t, you can get it from http://java.sun.com/j2se/1.5.0/download.jsp. I usually just open a command window, type java, hit Enter, and see if it just hangs. If it does, it’s probably installed, and I hit Ctrl-C to cancel.
You’ll also need MWDumper. Download mwdumper.jar from http://download.wikimedia.org/tools/. Put this file and the wiki dump file in the same directory, say, c:\tmp.
You’ll need to edit MySQL’s config file to increase the max_allowed_packet size. If you don’t, the import will most likely choke around the 49,000-article mark. This is quite annoying, because it kills the rest of the import. While you’re at it, you might as well change the innodb_log_file_size, which should modestly increase the import speed. To do so, go to c:\wamp\mysql, right-click on my.ini, and select Open. This will open up the ini file in a text editor. Find the line innodb_log_file_size, and set this to 512M (it was 10M in mine). Scroll to the bottom, and add the following line:
set-variable=max_allowed_packet=32M
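Taken together, the two my.ini edits amount to something like the fragment below. This is a sketch of only the lines touched; the section placement is an assumption based on a stock WAMP my.ini, so check where these lines live in yours:

```ini
[mysqld]
innodb_log_file_size=512M
set-variable=max_allowed_packet=32M
```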
Remember that little dashboard with the lock in your systray? Left-click on it, and a menu should pop up. Select MySQL->Stop Service. Wait a few seconds, then left-click on it again, and select MySQL->Start / Resume Service.
Before you import, you’re going to need to delete data in MySQL from the default installation. Otherwise, you’ll get errors about a dupe right at the start, and then none of the rows will import. Left-click on the dashboard with a lock in your systray, go to “MySQL”, and select “MySQL console”. You’ll be asked for a password, which by default is blank, so just hit enter. Enter in the following commands into the console:
use wikidb;
delete from page;
delete from revision;
delete from text;
quit
[screenshot: mysql_delete_data.jpg]
This will delete all pages in the wiki.
Open a command window by going to Start->Run, and typing in cmd. Type c: to change to the c: drive, and then cd c:\tmp to change to the directory where you put mwdumper.jar and the wiki dump file. You’re ready to do the import, but beware — this will take a long time. It’s best to start the process, then leave for a few hours. When you’re ready, type the following:
java -jar mwdumper.jar --format=sql:1.5 enwiki-latest-pages-articles.xml.bz2 | c:\wamp\mysql\bin\mysql -u wikiuser -p wikidb
[screenshot: import_starting.jpg]
This will begin the import process, and as noted, this will take a long time. There are over three million pages to process, so don’t expect it to finish right away. On a reasonably fast single processor machine (*not* my laptop), it took me over 24 hours.
Usage

Using Wikipedia Offline is pretty straightforward. If you haven’t already, start WAMP. (If you see the dashboard with a lock icon in your systray, and it’s white, then it’s running. If not, go to Start->Programs->WampServer->Start Wampserver.) Then, just fire up a web browser and browse to http://127.0.0.1/wikipedia/. If all goes well, it should be accessible just like Wikipedia, searches and all.
[screenshot: wikipedia_up.jpg]
Anticipated Questions

  • Why do this?
    I’m not always connected to the Internet, and think Wikipedia is a great resource. Now I can take it wherever I want. I suppose if I were paranoid about Wikipedia tracking my searches, then I could do this and do all the searches I wanted offline. Doing it this way also seemed like a fun tech project.
  • Is this legal?
    Sure! Wikipedia offers all of their data for use by interested parties. All of the software involved is open source, except for Windows.
  • Where are the pictures?
    You can download a dump of the English Wikipedia images from here. Wikipedia doesn’t package these with the dump for two reasons: 1) the images might be copyrighted, so they don’t want to distribute them; and 2) the dump file would be huge. As of this writing, the dump file is about 75 GB, which was larger than the hard drive on my laptop.
  • Isn’t it overkill to install full MediaWiki?
    Yes, but it’s not nearly as much effort as you might think. Plus, with WAMP, I can experiment with other types of LAMP-based software. You can always build static pages if you’d prefer something a little more lightweight.
  • Won’t the data fall out of date?
    Yes, but I’m doing this more to just have a quick reference, rather than something that’s kept constantly up to date. In that sense, it’s similar to those encyclopedia CD-ROMs! Besides, the way the dumps are handled, you’re guaranteed to be slightly out of date. If you really need to be that current, you should probably be going online.
  • How can updates be done?
    I’m presuming I can just go in mysql and delete from the ‘page’, ‘revision’, and ‘text’ tables; download a new dump; and re-import using mwdumper.jar. I haven’t actually tried this, though.
  • Can I use these instructions to run a wiki web site?
    There are a few problems with this. First, the WAMP folk note that “WAMP5 is not meant to be a production server.” Also, running a web site takes a fair bit of security knowledge to prevent hacking, so you’ll get yourself in trouble if you just use it to publish on the ‘net. Finally, you can’t republish the contents of Wikipedia as your own site. So, technically, you could use the first few parts to set up wiki, but it’s not a good idea. You’re better off getting a proper web host that has a one-click install of wiki, such as DreamHost.
  • How do I uninstall?
    If you want to trash the whole thing, go to “Add or Remove Programs” in the Control Panel, and select WAMP5. Remove it, then be sure to delete C:\wamp as well.

source: http://blog.onetechnical.com/2006/08/31/how-to-install-wikipedia-for-offline-access/

Wikipedia Offline

on Tuesday, 04 December 2012

Most of my work online involves checking mail, browsing forums for answers, reading Wikipedia for information, or social networking. With LAN cuts introduced in the IITs, it is difficult for a student to access information after 12:10 unless they break out somehow. In an earlier post, I explained, with references to my code, how to download parts of Wikipedia; I thought it would be helpful to download the whole of Wikipedia onto your computer. In this post I will show you how Wikipedia / Stack Overflow / Gmail can be downloaded for offline use.
Wikipedia
Requirements:
  • LAMP (Linux, Apache, MySQL, PHP)
  • Around 30 GB of space in the primary partition (in my case the root partition) and 30 GB of space for storage
  • 7 GB of free Internet download
  • 3 days of free time
Wikipedia dumps can be downloaded from the Wikipedia site in XML format, compressed with 7zip. This is around 6 GB when compressed and expands to around 25 GB of XML pages. It doesn’t include any images. This page shows how one can extract text articles from the dump and construct corpora from them. Apart from this, a static HTML dump can also be downloaded from the Wikipedia page (wikipedia-en-html.tar.7z), and this version has images in it. The compressed version is 15 GB and it expands to over 200 GB because of all the images.
The static HTML dump can simply be extracted to get all the HTML files, and the required HTML file can be opened to view the content. If you download the XML dump instead, there is more to do – you have to extract the articles and build your customized offline Wikipedia with the following steps.
  1. Download the latest mediawiki and install it on your Linux/Windows machine using LAMP/WAMP/XAMPP. Mediawiki is the software that renders Wikipedia articles using the data stored in MySQL.
  2. Mediawiki needs a few extensions which are installed on Wikipedia. Once we have mediawiki installed, say at /var/www/wiki/, download each of them and install them by extracting the extensions into the /var/www/wiki/extensions directory.
    The following extensions have to be installed – CategoryTree, CharInsert, Cite, ImageMap, InputBox, ParserFunctions (very important), Poem, randomSelection, SyntaxHighlight-GeSHi, timeline, wikihero – all of which can be found on the Mediawiki extensions download page by following the instructions. In addition, you can install any template to make your wiki look however you want. Now your own wiki is ready to use; you can add your own articles, but what we want now is to copy the original Wikipedia articles into our wiki.
  3. It is easier to import all the data once and then build the index in MySQL than to update the index each time an article is added. Open MySQL and your database; the tables used in the import are text, page and revision. You can drop the indexes on those tables and recreate them in step 5 to speed up the process.
  4. Now that we have our XML dump, we need to import it into the MySQL database. You can find the instructions here. In short, summarizing the instructions found on that page: the ONLY way to get Wikipedia onto your computer really fast is to use the mwdumper tool to import into the database. The inbuilt tool in mediawiki is slow and may run for several days. The following command can be used to import the dump into the database within an hour.
    java -jar mwdumper.jar --format=sql:1.5 <dumpfile> | mysql -u <username> -p <databasename>
  5. Recreate the indexes on the tables ‘page’, ‘revision’ and ‘text’ and you are done.
You can comment if you want to try the same or if you run into any problems while trying.
Stack-overflow
Requirements
  • LAMP (Linux, Apache, MySQL, PHP)
  • Around 15 GB of space in the primary partition and 15 GB of storage. In my case the root partition
  • 4 GB of free Internet download
media10.simplex.tv/content/xtendx/stu/stackoverflow has several stackoverflow zip files available for direct download. Alternatively, stack-overflow dumps can be downloaded using a torrent. A torrent download can be converted into an FTP download using http://www.torrific.com/home/. Once you have the dumps, you can unpack them to get huge XML files for several Stack sites. Stack Overflow is one of the Stack sites; its 7zip file is broken into 4 parts, which have to be combined with a command (cat a.xml b.xml c.xml d.xml > full.xml). Once combined and extracted, we see 6 XML files for each site (badges, comments, postHistory, posts, users, votes). Among these, comments, posts and votes are the useful ones for offline usage of the forum. A main post may have several reply posts, and each such post may have follow-up comments. Votes rate an answer and can serve as signals while you browse through questions. Follow these steps to import the data into the database and use the UI to browse posts offline.
  • Download Stack sites
  • Create a database StackOverflow with the schema using the description here. (comments, posts and votes tables are enough)
  • Use the code to import the data to the database. (Suitably modify the variables serveraddress, port, username, password, databasename, rowspercommit, filePath and site in the code)
  • Run the code on Stack Mathematics to import the mathematics site. For bigger sites, it may take much more time and a lot of optimizations are needed along with a lot of disk space in the primary partition where the MySQL stores its databases.
  • Use the UI php files to view a post given the post number along with the comments and replies.
  • TODO: Additionally we can add a search engine that searches the table ‘posts’ for queries and returns post numbers which match the same.
Gmail offline
Requirements:
  • Windows / Mac preferred
  • Firefox preferred
  • 20 minutes for setup
  • 1 hour for download
Gmail allows offline usage of mail, chats, calendar data and contacts. You can follow this simple series of steps to get Gmail on your computer.
  • Install Google gears for firefox
    • You can install google gears from the site http://gears.google.com
    • If you are on Linux, you can install gears package. [sudo apt-get install xul-ext-gears]
    • Note: Gears works well in Windows, may fail on Linux
  • Login to gmail
  • Create new label “offline-download”
  • Create a filter ([subject contains: "Chat with"] or [from: ]) -> add label “offline-download” to selectively download your conversations.
  • Enable offline Gmail in Settings, and allow downloading of “offline-download” for 5 years. You can select a different period of time as well.
  • Start the download; it will finish in around an hour, and you will have your mail on your computer.
Offline Gmail creates a database called [emailID]@gmail.com#database on your computer. The Gears site gives you its location. You can find some information about offline Gmail here.
If you want a custom interface for your mail / chats etc., you can create one that queries the SQLite database mentioned above and presents the content however you want. The software diarymaker can be used to read your chat data, with plots of frequency over time, and to rank your friends by interactivity. It works on Linux and uses the Qt platform. I will add a post on it soon.
Feel free to comment on any issue, if you have an idea for downloading any other kind of data on to your computer for offline usage, please let us know with a comment.
Update:
Now you can download stackoverflow directly from media10.simplex.tv/content/xtendx/stu/stackoverflow. (Courtesy: Sagar Borkar)
 
source: http://kashthealien.wordpress.com/2011/08/06/wikipedia-offline/
 
© Geazzy Corner All Rights Reserved