Posts

github add remote repo

git clone <<URL to git repository>>
git config --global user.email <<your email ID>>
git config --global user.name "<<your name>>"

To see the files in a commit:
git log
git show e6f6a1e19b90d3e590a62bd08eea6e284d259995 --name-only
git status

To add new files to the repo:
git add <<file name>>
git commit -m <<"message">> <<file name>>
git push
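Since the post title mentions adding a remote, here is a minimal runnable sketch of wiring a local repository to a new remote and pushing. All paths and the identity values are placeholders; a local bare repository stands in for the GitHub URL:

```shell
set -e
tmp="$(mktemp -d)"

# A bare repository stands in for the remote (e.g. git@github.com:user/repo.git).
git init --bare "$tmp/remote.git"

# Set up a local working repository with one commit.
git init "$tmp/work"
cd "$tmp/work"
git config user.email "you@example.com"   # placeholder identity
git config user.name "Your Name"
echo "hello" > README
git add README
git commit -m "initial commit"

# Attach the remote and push the current branch, setting it as upstream.
git remote add origin "$tmp/remote.git"
git push -u origin "$(git branch --show-current)"
```

With `-u`, later pushes from this branch can be a bare `git push`.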

git push error : failed to push some refs to

>> git push
To github.xxxx..git
 ! [rejected]        master -> master (non-fast-forward)
error: failed to push some refs to 'git@github.xxxxxx.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

>> git pull
error: Your local changes to the following files would be overwritten by merge:
        xxx.file

>> git push -u origin master
To github.xxxx.git
 ! [rejected]        master -> master (non-fast-forward)
error: failed to push some refs to 'git@github.xxxx.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

Gitlab change project visibility from private to internal

Apparently there is no clickable option in the GitLab portal to change the project visibility from private to internal or to public once it has been set at creation time. To work around this, append the string "/edit" to the end of your repo URL, which opens the settings page. Expand the "Permissions" section to change the visibility, then click "Save Changes" to apply. Hope this helps.
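As a quick illustration of the URL trick above (the project URL here is a made-up placeholder):

```shell
# Hypothetical project URL -- substitute your own GitLab project path.
repo_url="https://gitlab.example.com/mygroup/myproject"

# Appending /edit opens the settings page, where the "Permissions"
# section exposes the visibility setting.
settings_url="${repo_url}/edit"
echo "$settings_url"
```

On GitLab instances with API access, the same change can reportedly also be made through the projects API (`PUT /projects/:id` with a `visibility` attribute), but verify that against your instance's API documentation.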

The plugin net.alchim31.maven:scala-maven-plugin:3.2.0 requires Maven version 3.0.4

[ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.2.0:compile (default) on project CPDS-PoC: The plugin net.alchim31.maven:scala-maven-plugin:3.2.0 requires Maven version 3.0.4 -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginIncompatibleException

Fix: Check the Maven Runtime setting under Run Configurations in STS and select the correct Maven version (3.0.4 or later) from the dropdown.
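Outside the IDE, one way to fail fast with a clearer message is the Maven Enforcer Plugin's requireMavenVersion rule. A sketch of a pom.xml build-plugin fragment; the enforcer plugin version shown is illustrative, so check its current release:

```xml
<!-- Sketch: enforce a minimum Maven version so the build fails with a
     clear message before scala-maven-plugin does. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <version>1.4.1</version>
  <executions>
    <execution>
      <id>enforce-maven</id>
      <goals><goal>enforce</goal></goals>
      <configuration>
        <rules>
          <requireMavenVersion>
            <!-- [3.0.4,) means "3.0.4 or newer" in Maven range syntax -->
            <version>[3.0.4,)</version>
          </requireMavenVersion>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```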

Apache Solr index exception : Conflict or Bad request while importing the data into Solr Collection


Convert HIVE table to AVRO format and export as AVRO file

Step 1: Create a new table using the Avro SerDe, based on the original table in HIVE. You can do it in the HUE data browser:

CREATE TABLE avro_test_table
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
TBLPROPERTIES (
  'avro.schema.literal'='{
    "namespace": "testnamespace.avro",
    "name": "testavro",
    "type": "record",
    "fields": [
      {"name":"strt_tstmp","type":"string"},
      {"name":"end_tstmp","type":"string"},
      {"name":"stts_cd","type":"int"}
    ]
  }');

This will create a new table in an Avro-compatible format in HIVE.

Step 2: Load data from the original table.
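Step 2 above ends before its command. A minimal sketch, assuming the original table is named test_table and has columns matching the Avro schema from Step 1 (both the source table name and the column match are assumptions here):

```sql
-- Sketch: copy rows from the (assumed) original table into the Avro
-- table created in Step 1. Column names come from the schema above;
-- 'test_table' is a placeholder for your source table.
INSERT OVERWRITE TABLE avro_test_table
SELECT strt_tstmp, end_tstmp, stts_cd
FROM test_table;
```

The Avro container files then live under the new table's warehouse directory in HDFS and can be copied out, for example with hdfs dfs -get.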

Load data from CSV into HIVE table using HUE browser

It may be a little tricky to load data from a CSV file into a HIVE table. Here is a quick command that can be triggered from the HUE editor.

Steps:
1. Upload your CSV file, containing column data only (no headers), into the use case directory or application directory in HDFS.
2. Run the following command in the HIVE data browser:

LOAD DATA INPATH "/data/applications/appname/table_test_data/testdata.csv" OVERWRITE INTO TABLE testschema.tablename;

3. This will overwrite all the contents of the table with the data from the CSV file, so any existing data in the table will be lost.

Make sure the table is already created in HIVE. You can create it as follows:

CREATE TABLE tablename (
  strt_tstmp string,
  end_tstmp string,
  stts_cd int
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE;
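If you want to keep the table's existing rows rather than replace them, dropping the OVERWRITE keyword makes LOAD DATA append instead. A sketch using the same placeholder path and table names as above:

```sql
-- Append variant: without OVERWRITE, LOAD DATA adds the file's rows
-- to the table instead of replacing its contents.
LOAD DATA INPATH "/data/applications/appname/table_test_data/testdata.csv"
INTO TABLE testschema.tablename;
```

Note that in either form, LOAD DATA INPATH moves the file from its HDFS location into the table's directory rather than copying it.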