Name: Alok, aka "Alok Sah"
Web Site: http://aloksah.org
Posts by Alok Sah:
Recently I have been working in a MEAN (MongoDB, Express.js, Angular, Node.js) stack environment. I enjoy working on applications and how they fit into the web ecosystem. After finishing my app I wanted to run it on a server; MongoDB and Angular were fine for me, but a question arose: how do I run Node.js in the background, as I am using Debian 10?
So I started with a system service, so the Node.js application runs as long as the system is up. On Linux, systemd is the service manager that starts, stops, and restarts programs.
Create a file nodejsapp.service with the following content in /usr/lib/systemd/system (or in /etc/systemd/system, the usual place for locally written units).
[Unit]
Description=Node.js ContactList Http Server

[Service]
User=Alok Sah
Group=Alok Group
WorkingDirectory=/root/contactlist/
ExecStart=/usr/bin/node /root/contactlist/app.js
Restart=on-failure
KillSignal=SIGQUIT

[Install]
WantedBy=multi-user.target

(Note that User and Group must be real system user and group names; names containing spaces are not valid, so substitute your own.)
assuming my nodejs file is in /root/contactlist/app.js.
Now use systemctl to control our app with the following commands:

# whenever any service file is changed, reload systemd
sudo systemctl daemon-reload

# enable or disable the service at machine start
sudo systemctl enable nodejsapp
sudo systemctl disable nodejsapp

# start, stop, or restart the app.js service
sudo systemctl start nodejsapp
sudo systemctl stop nodejsapp
sudo systemctl restart nodejsapp

# check whether the service is active
sudo systemctl status nodejsapp
Once the Node.js service has started successfully with no configuration errors, launch your web browser and check that your application is running, without npm start :).
Storing large files as objects in the database is much more interesting than just saving file paths.

Large object data such as images and documents can be saved into the database by two simple methods:

1> Save images in base64-encoded form, which I explained in my earlier article PHP Encoding with base64.

2> Use the BYTEA data type in PostgreSQL.
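Method 1> can be sketched at the command line. base64 turns arbitrary bytes into plain text that fits in any TEXT column, at roughly 33% size overhead; here the coreutils base64 tool stands in for PHP's base64_encode/base64_decode, and image.bin is just a stand-in file:

```shell
# encode a file's bytes as base64 text, safe to store in a TEXT column
printf 'hello' > image.bin            # stand-in for an uploaded image
base64 image.bin > image.b64
cat image.b64                         # prints: aGVsbG8=

# decode it back to the original bytes
base64 -d image.b64 > roundtrip.bin
cmp -s image.bin roundtrip.bin && echo "roundtrip OK"
```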
“The BYTEA data type allows storage of binary strings”, which you can think of as raw bytes.
When you SELECT a bytea column, PostgreSQL returns the data in an escaped text form. In the older “escape” format this is octal byte values prefixed with ‘\’ (e.g. \032); since version 9.0 the default output is the “hex” format prefixed with \x (e.g. \x1a). Either way, you are expected to convert it back to binary yourself.
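In the escape format, each \nnn is an octal byte value, so it can be decoded by hand. A quick check with standard tools (od is used here purely for illustration) shows that \032 is the byte 0x1a:

```shell
# the octal escape \032 is the single byte 0x1a (decimal 26)
printf '\032' | od -An -tx1           # shows the hex value 1a
```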
To start, create a column doc_image having data type “bytea” in a table.
Now, when you use a file upload in your PHP project, you will get the file data in the $_FILES superglobal variable, and you first have to move the file onto your server, as I have moved files to an upload folder:
move_uploaded_file($_FILES["file"]["tmp_name"], "/upload/".$_FILES["file"]["name"]);

// get data of image from upload folder
$dataString = file_get_contents("/upload/".$_FILES["file"]["name"]);
$raw_data = pg_escape_bytea($dataString);
// store this string into your database
Use the “pg_escape_bytea” function; it returns an escaped string for insertion into a bytea field. Store this raw string ($raw_data) in your database.
Now, to retrieve that file data, use the “pg_unescape_bytea” function; it returns the unescaped string, possibly containing binary data.
And if you uploaded an image file, use the “pg_unescape_bytea” function to convert it back to binary, then send a suitable Content-Type header and echo the data so the browser displays the image.
A few days ago I had to take a dump of my database, which is on my local system and stored in MySQL. I tried the phpMyAdmin tool, but it took too much time and could not succeed.

So I decided to dump my database from the command line using the MySQL utility “mysqldump”.

I found it very easy, and in less than a minute it took a dump of all my databases.
To use it, change to the MySQL bin directory.

There are various parameters available with this command.

For simple use, type the following command:
mysqldump -u root -p dbname > C:\dbdump1.sql
-u gives the database user, -p prompts for the password, dbname is your database name, and “>” tells the shell to write mysqldump's output to the file C:\dbdump1.sql.
mysqldump -u[dbuser] -p[password] dbname > dump2.sql
Note: this gives you the warning “Using a password on the command line interface can be insecure.”, since the password is visible in plain text.
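One thing worth stressing: the “>” in these commands is ordinary shell output redirection, not a mysqldump option; mysqldump simply writes SQL to stdout and the shell saves it to the file. The same mechanism, with echo standing in for mysqldump:

```shell
# '>' captures a command's stdout into a file
echo "-- pretend this is a dump" > dump_demo.sql
cat dump_demo.sql                     # prints: -- pretend this is a dump

# restoring goes the other way, feeding the file back with '<', e.g.:
#   mysql -u root -p dbname < dump_demo.sql
```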
All Database Dump:
If you want to take a dump of all of your databases in one go, use the command below:
mysqldump --all-databases --single-transaction --user=root > dump3.sql
If you have set any password for your database, then include “--password=dbpassword” in the above command; this is, again, insecure.
Include “--databases <db1> <db2>”; this allows you to take a dump of specific databases:
mysqldump --databases db1 db2 --single-transaction --user=root --password > dump4.sql
The dynamic table name in a PostgreSQL trigger is available as “TG_TABLE_NAME”.

“TG_TABLE_NAME” is the name of the table that caused the trigger invocation.
For example, if you want to copy data from one table to another whenever the first table is updated, then start using triggers.
Let's say you have a table named “first_table”. Create an identical table as a log table, to store a log of the first table's data.
Now use PostgreSQL to create the function and trigger:
Firstly, create a trigger on the first table that fires for each updated or deleted row, like so:
CREATE TRIGGER trigger_name
AFTER DELETE OR UPDATE ON first_table
FOR EACH ROW
EXECUTE PROCEDURE updation_log();
Now create the function that this trigger executes (note that PostgreSQL requires the function to exist before the trigger that references it, so in practice create the function first):
CREATE OR REPLACE FUNCTION updation_log() RETURNS trigger AS $$
BEGIN
  EXECUTE format('INSERT INTO %I VALUES (($1).*)', TG_TABLE_NAME || 'log') USING OLD;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
format() is simply a string-formatting function, like sprintf() in the C language; the %I placeholder inserts its argument quoted as an SQL identifier.
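The same template idea can be sketched with shell's printf (with the caveat that SQL's %I also double-quotes its argument as an identifier, which a plain %s substitution does not):

```shell
# substitute the computed table name into a statement template,
# mirroring format('INSERT INTO %I ...', TG_TABLE_NAME || 'log')
tbl="first_table"
printf 'INSERT INTO %s VALUES (...);\n' "${tbl}log"
# prints: INSERT INTO first_tablelog VALUES (...);
```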
This procedure copies the first table's data to a second table whose name is built as “TG_TABLE_NAME || 'log'”: if the table name is first_table, the data is copied to first_tablelog.