Posts

GitLab: change project visibility from private to internal

Apparently there is no clickable option in the GitLab portal to change a project's visibility from private to internal (or to public) once it has been set at creation time. To do this, append the string "/edit" to the end of your repo URL and the settings page will be shown. Expand the "Permissions" section to change the visibility, and click "Save Changes" for the change to take effect. Hope this helps.
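If you would rather script the change, the same switch can also be made through the GitLab REST API (PUT /projects/:id with a visibility attribute). Below is a minimal sketch in Scala using java.net.http; the host, project ID, and token are placeholders, not values from this post.

import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object SetProjectVisibility {
  def main(args: Array[String]): Unit = {
    // Placeholders: your GitLab host, the numeric project ID, and a personal access token
    val gitlabHost = "https://gitlab.example.com"
    val projectId  = "12345"
    val token      = sys.env.getOrElse("GITLAB_TOKEN", "")

    // PUT /api/v4/projects/:id?visibility=internal updates the project visibility
    val request = HttpRequest.newBuilder()
      .uri(URI.create(s"$gitlabHost/api/v4/projects/$projectId?visibility=internal"))
      .header("PRIVATE-TOKEN", token)
      .PUT(HttpRequest.BodyPublishers.noBody())
      .build()

    val response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
    println(s"${response.statusCode()}: ${response.body()}")
  }
}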

The plugin net.alchim31.maven:scala-maven-plugin:3.2.0 requires Maven version 3.0.4

[ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.2.0:compile (default) on project CPDS-PoC: The plugin net.alchim31.maven:scala-maven-plugin:3.2.0 requires Maven version 3.0.4 -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginIncompatibleException

Fix: Please check the Maven Runtime setting under the Run configuration in STS and select the correct Maven version from the dropdown.

Apache Solr index exception: Conflict or Bad request while importing the data into Solr Collection


Convert HIVE table to AVRO format and export as AVRO file

Step 1: Create a new table using the Avro SerDe, based on the original table in HIVE. You can do it in the HUE data browser:

CREATE TABLE avro_test_table
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
TBLPROPERTIES (
    'avro.schema.literal'='{
      "namespace": "testnamespace.avro",
      "name": "testavro",
      "type": "record",
      "fields": [
        {"name":"strt_tstmp","type":"string"},
        {"name":"end_tstmp","type":"string"},
        {"name":"stts_cd","type":"int"}
      ]
    }');

This will create a new table in an AVRO-compatible format in HIVE.

Step 2: Load data from the original table ...
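Step 2 is cut off in this excerpt, but the load into the new table can also be done from a Spark job with Hive support enabled. A minimal sketch, assuming a hypothetical source table testschema.original_table with the same three columns:

import org.apache.spark.sql.SparkSession

object CopyIntoAvroTable {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("copy-into-avro-table")
      .enableHiveSupport()
      .getOrCreate()

    // Copy the rows of the (hypothetical) original table into the AVRO-backed table from Step 1
    spark.sql(
      """INSERT OVERWRITE TABLE avro_test_table
        |SELECT strt_tstmp, end_tstmp, stts_cd
        |FROM testschema.original_table""".stripMargin)

    spark.stop()
  }
}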

Load data from CSV into HIVE table using HUE browser

It may be a little tricky to load data from a CSV file into a HIVE table. Here is a quick command that can be triggered from the HUE editor.

Steps:
1. Upload your CSV file, containing column data only (no headers), into the use case directory or application directory in HDFS.
2. Run the following command in the HIVE data browser:

LOAD DATA INPATH "/data/applications/appname/table_test_data/testdata.csv" OVERWRITE INTO TABLE testschema.tablename;

3. This will overwrite all the contents in the table with the data from the CSV file, so any existing data in the table will be lost.

Make sure the table is already created in HIVE. You can create the table as follows:

CREATE TABLE tablename (
  strt_tstmp string,
  end_tstmp string,
  stts_cd int
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE;
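The same load can also be done from a Spark job with Hive support enabled instead of the HUE editor. A minimal sketch, reusing the hypothetical path, schema, and table name from above:

import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

object LoadCsvIntoHive {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("load-csv-into-hive")
      .enableHiveSupport()
      .getOrCreate()

    // Schema matching the target table (the CSV has no header row)
    val schema = StructType(Seq(
      StructField("strt_tstmp", StringType),
      StructField("end_tstmp", StringType),
      StructField("stts_cd", IntegerType)))

    spark.read
      .schema(schema)
      .csv("/data/applications/appname/table_test_data/testdata.csv")
      .write
      .mode(SaveMode.Overwrite)            // like LOAD ... OVERWRITE, existing rows are replaced
      .insertInto("testschema.tablename")  // the table must already exist

    spark.stop()
  }
}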

java.nio.file.NoSuchFileException: hdfs:/nameservice1/user HDFS Scala program

At the time of writing this, I could not find an effective native Scala API to copy and move files. The most common recommendation was to use the java.nio.* package.

UPDATE: The java.nio.* approach may not always work on HDFS, so I found the following solution that works: move files from one directory to another using the org.apache.hadoop.fs.FileUtil.copy API.

val fs = FileSystem.get(new Configuration())
val conf = new org.apache.hadoop.conf.Configuration()
val srcFs = FileSystem.get(new org.apache.hadoop.conf.Configuration())
val dstFs = FileSystem.get(new org.apache.hadoop.conf.Configuration())
val dstPath = new org.apache.hadoop.fs.Path(DEST_FILE_DIR)
for (file <- fileList) {
  // The 5th parameter indicates whether source should be deleted or not
  FileUtil.co...
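Since the excerpt is cut off, here is a complete, self-contained sketch of the same FileUtil.copy approach. The source and destination directories are hypothetical, and fileList is simply built by listing the source directory:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, FileUtil, Path}

object MoveHdfsFiles {
  def main(args: Array[String]): Unit = {
    val conf  = new Configuration()
    val srcFs = FileSystem.get(conf)
    val dstFs = FileSystem.get(conf)

    // Hypothetical source and destination directories on HDFS
    val srcDir  = new Path("/data/applications/appname/src")
    val dstPath = new Path("/data/applications/appname/dest")

    // Build the list of files to move by listing the source directory
    val fileList = srcFs.listStatus(srcDir).map(_.getPath)

    for (file <- fileList) {
      // The 5th parameter (deleteSource = true) makes the copy behave like a move
      FileUtil.copy(srcFs, file, dstFs, dstPath, true, conf)
    }
  }
}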

Oracle impdp command ORA-39145, ORA-31655, ORA-39154

While using the Oracle 11g impdp command, there are a few prerequisites that need to be taken care of:

1. impdp is an OS command and hence needs to be executed from the command prompt, not from SQL*Plus. To use it from within SQL*Plus, you have to use the following command: host impdp

2. Prerequisite steps:

a. First create the directory within SQL*Plus:
create or replace directory importTestDB as 'c:\importTestDB';
If there is an issue with this directory, you may receive the following error:
ORA-39145: directory object parameter must be specified and non-null

b. Grant the permissions on this directory to the user:
grant read,write on DIRECTORY importTestDB to user1;

c. Grant the permission to import a full database to the user:
grant imp_full_database to user1;
Failing which, you may receive the following errors:
ORA-31655: no data or metadata objects selected for job
ORA-39154: Objects from foreign schemas have been removed from import

3. Execute the following comma...