This inefficiency occurs when analysts use SELECT * (reading more columns than needed) or rely on LIMIT as a cost-control mechanism. In BigQuery, projecting excess columns increases the amount of data read and can materially raise query cost, particularly on wide tables and frequently run queries. Separately, applying LIMIT does not inherently reduce bytes processed on non-clustered tables; it only caps the number of rows returned. The “LIMIT saves cost” assumption holds only in some cases on clustered tables, where BigQuery may be able to stop scanning once enough clustered blocks have been read to satisfy the limit.
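The effect of column pruning can be checked without incurring any cost by using a dry run. Below is a minimal sketch using the google-cloud-bigquery client; the table name and column names are hypothetical placeholders, and credentials are assumed to be available via Application Default Credentials.

```python
# Sketch: compare bytes processed by SELECT * versus an explicit column list
# using a BigQuery dry run (no data is read, nothing is billed).
from google.cloud import bigquery

client = bigquery.Client()

def dry_run_bytes(sql: str) -> int:
    """Return the bytes the query would process, estimated via a dry run."""
    config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
    job = client.query(sql, job_config=config)
    return job.total_bytes_processed

TABLE = "my-project.analytics.events"  # hypothetical wide table

star_sql = f"SELECT * FROM `{TABLE}` WHERE event_date = '2024-01-01'"
narrow_sql = f"SELECT user_id, event_name FROM `{TABLE}` WHERE event_date = '2024-01-01'"

print("SELECT * bytes:        ", dry_run_bytes(star_sql))
print("Explicit columns bytes:", dry_run_bytes(narrow_sql))
```

On a wide table, the second estimate is typically much smaller because BigQuery's columnar storage only reads the columns the query actually references.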
In short: BigQuery query cost is driven by the amount of data the query reads. Selecting unnecessary columns increases bytes processed. LIMIT reduces only the rows returned; on non-clustered tables it does not reduce the data read. On clustered tables, LIMIT can reduce bytes scanned in some cases, because scanning may stop after enough blocks have been read.
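The LIMIT behavior can be illustrated the same way. In this sketch (again with a hypothetical, non-clustered table and the same assumed client setup), the dry run typically reports identical byte counts with and without LIMIT, showing that LIMIT caps the result set rather than the scan.

```python
# Sketch: on a non-clustered table, LIMIT does not change the bytes a query
# processes; it only limits the rows returned.
from google.cloud import bigquery

client = bigquery.Client()
config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

TABLE = "my-project.analytics.page_views"  # hypothetical non-clustered table

no_limit = client.query(f"SELECT page_url FROM `{TABLE}`", job_config=config)
with_limit = client.query(f"SELECT page_url FROM `{TABLE}` LIMIT 10", job_config=config)

# Expect these two numbers to match for a non-clustered table.
print("Without LIMIT:", no_limit.total_bytes_processed)
print("With LIMIT:   ", with_limit.total_bytes_processed)
```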