FREE Data Engineering on Microsoft Azure: Design & Implement Data Storage Questions and Answers


You are working with Azure Data Lake Store Gen1 and realize how important it is to understand the external data's schema. Which plug-in from the list below would you use to learn the external data schema?


When the external data schema is unknown, the infer_storage_schema plug-in helps infer the schema based on the contents of the external file.
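As an illustrative sketch (the storage URI, secret, file prefix, and data format below are placeholder assumptions, not values from the question), the plug-in can be invoked in KQL like this:

```kusto
// Infer the schema of external Parquet files from their contents
let options = dynamic({
    'StorageContainers': [
        'https://mystorageaccount.blob.core.windows.net/mycontainer;<secret-key>'  // placeholder
    ],
    'FileExtension': '.parquet',
    'FileNamePrefix': 'part-',
    'DataFormat': 'parquet'
});
evaluate infer_storage_schema(options)
```

The result is a schema string that can then be used when defining the external table.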

You are setting up a storage account for Azure Data Lake Gen2. Data must remain available for both read and write operations even if an entire data center (zonal or non-zonal) goes down. Which replication technique would you apply to the storage account? (Select the least expensive solution.)


Zone-redundant storage (ZRS) replicates Azure Storage data synchronously across three Azure availability zones in the primary region. If a zone becomes unavailable, the data remains accessible for both read and write operations.

You have created an external table named ExtTable in Azure Data Explorer. A database user now needs to run a KQL (Kusto Query Language) query against this external table. Which of the following functions should the user employ to reference the table?


The external_table() function should be used to refer to an external table after it has been defined. Any database reader or user can query an external table.
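For instance, a query against the ExtTable table from the question might look like this (the `take 10` preview is just an illustrative choice):

```kusto
// Reference the external table by name and preview a few rows
external_table("ExtTable")
| take 10
```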

When you create a temporal table in Azure SQL Database, a history table is automatically created in the same database to store the historical records. Which of the following statements about the history table and the temporal table is accurate?


A temporal table requires exactly one primary key. System versioning must be turned on in order to create a temporal table. If you do not supply a name for the history table, the default naming scheme is applied to it.
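A minimal T-SQL sketch showing these requirements (the table and column names are illustrative): a primary key, system versioning turned on, and an explicitly named history table:

```sql
CREATE TABLE dbo.Employee
(
    EmployeeID INT NOT NULL PRIMARY KEY,  -- a temporal table requires a primary key
    Name       NVARCHAR(100) NOT NULL,
    ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));
-- Omit the HISTORY_TABLE clause to let the default naming scheme apply
```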

You work for a cloud company and have been given the task of creating an enterprise data lake on Azure and carrying out big data analytics. Which Azure service from the list below would you choose in this situation?


With Azure Blob storage, unstructured data can be stored and accessed at scale using block blobs. Azure Blobs are the best choice:
> When you want your apps to handle random-access and streaming scenarios.
> When you need to access application data from any location.
> When you need to build an enterprise data lake on Azure and do big data analytics.

Your current project involves a columnstore table. Although columnstore tables and indexes are always stored using columnstore compression, you want to reduce the size of the columnstore data further, so you decide to configure archival compression, an add-on compression. Which of the following techniques would you employ to apply archival compression to the data?


COLUMNSTORE_ARCHIVE data compression is used to further reduce the size of columnstore data through archival compression.
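As a sketch (the table name dbo.FactSales is a placeholder), archival compression is applied by rebuilding the table or index with the COLUMNSTORE_ARCHIVE option:

```sql
-- Apply archival compression to all partitions of a columnstore table
ALTER TABLE dbo.FactSales
REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);

-- Rebuilding with DATA_COMPRESSION = COLUMNSTORE reverts to standard
-- columnstore compression if query performance matters more than size
```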

You must create data semantic models in SQL Server Analysis Services, and there are a few recommended best practices for data modeling that you should follow. Which of the following would you consider the best practices to follow when developing data semantic models?


Even if you need to ingest data from many sources, you should still design a star and/or snowflake dimensional model. The best practices advise excluding all natural keys from the dimension tables and including only integer surrogate keys or value encoding. You should reduce cardinality to minimize the uniqueness of the values and enable considerably greater compression.
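A hedged T-SQL sketch of a star-schema dimension table following these practices (the table and column names are illustrative, not from the question): an integer surrogate key is used and the source system's natural key is left out:

```sql
CREATE TABLE dbo.DimProduct
(
    ProductKey  INT IDENTITY(1,1) NOT NULL PRIMARY KEY,  -- integer surrogate key
    ProductName NVARCHAR(100) NOT NULL,
    Category    NVARCHAR(50)  NOT NULL   -- low-cardinality attribute compresses well
    -- The source system's natural key (e.g. a ProductCode) is deliberately excluded
);
```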

FREE November-2024