Hadoop users leverage tools such as MapReduce, Hive, and HBase for various data processing requirements. These tools do not share a common notion of storage formats, schemas, data models, or data types. Apache HAWQ (incubating), together with its extension framework (PXF), provides a high-performance, massively parallel SQL processing framework over unmanaged data stores and formats in the Hadoop ecosystem. HCatalog serves as glue for the Hadoop ecosystem by providing a relational abstraction over HDFS data. This talk introduces the integration of HCatalog metadata into HAWQ's in-memory catalog, which provides a simple and seamless access paradigm to data managed by Hive.
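
For illustration, the resulting access pattern might look like the following, a minimal sketch assuming a Hive table named web_logs in the default Hive database (the table and column names are hypothetical):

    -- Query a Hive-managed table directly from HAWQ via the HCatalog
    -- integration; Hive metadata is surfaced through HAWQ's in-memory
    -- catalog, so no explicit external-table DDL is needed.
    SELECT url, count(*) AS hits
    FROM hcatalog.default.web_logs
    GROUP BY url;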