Following what's already written in "How to use GROUP_CONCAT in a CONCAT in MySQL"...
Let's assume you have a schema and sample data equivalent to:
CREATE TABLE properties
(
id integer PRIMARY KEY,
property varchar(255) NOT NULL
) ;
CREATE TABLE recommended_providers
(
id integer PRIMARY KEY,
recommended_provider_name varchar(255) NOT NULL
) ;
CREATE TABLE properties_recommended_providers
(
prop_id INTEGER NOT NULL REFERENCES properties(id),
recommended_provider_id INTEGER NOT NULL REFERENCES recommended_providers(id),
PRIMARY KEY (prop_id, recommended_provider_id)
) ;
INSERT INTO properties
VALUES (1, 'property 1'), (2, 'property 2') ;
INSERT INTO recommended_providers
VALUES (1, 'provider 1'), (2, 'provider 2'), (3, 'provider 3');
INSERT INTO properties_recommended_providers
VALUES (1, 1), (1, 2), (2, 3) ;
You can either have:
SELECT
p.id AS property_id, property,
GROUP_CONCAT(recommended_provider_name SEPARATOR ', ') AS provider_names
FROM
properties p
LEFT JOIN properties_recommended_providers pr ON pr.prop_id = p.id
LEFT JOIN recommended_providers rp ON rp.id = pr.recommended_provider_id
GROUP BY
property_id
ORDER BY
property_id ;
(Check it at SQL Fiddle)
or
SELECT
p.id AS property_id, property,
(SELECT
GROUP_CONCAT(recommended_provider_name SEPARATOR ', ')
FROM
properties_recommended_providers pr
LEFT JOIN recommended_providers rp ON rp.id =
pr.recommended_provider_id
WHERE
pr.prop_id = p.id
ORDER BY
/* You can easily order by in this case */
recommended_provider_name
) AS provider_names
FROM
properties p
ORDER BY
property_id ;
You can check the second one at SQL Fiddle
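If you want to try either variant without a MySQL server, here is a self-contained sketch using Python's sqlite3 with the same schema and data. Note the one assumption baked in: SQLite's group_concat() takes the separator as a second argument instead of MySQL's SEPARATOR keyword.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE properties (
    id INTEGER PRIMARY KEY,
    property TEXT NOT NULL
);
CREATE TABLE recommended_providers (
    id INTEGER PRIMARY KEY,
    recommended_provider_name TEXT NOT NULL
);
CREATE TABLE properties_recommended_providers (
    prop_id INTEGER NOT NULL REFERENCES properties(id),
    recommended_provider_id INTEGER NOT NULL REFERENCES recommended_providers(id),
    PRIMARY KEY (prop_id, recommended_provider_id)
);
INSERT INTO properties VALUES (1, 'property 1'), (2, 'property 2');
INSERT INTO recommended_providers
    VALUES (1, 'provider 1'), (2, 'provider 2'), (3, 'provider 3');
INSERT INTO properties_recommended_providers VALUES (1, 1), (1, 2), (2, 3);
""")

# JOIN + GROUP BY variant; SQLite spells it group_concat(expr, separator)
rows = conn.execute("""
    SELECT p.id, p.property,
           group_concat(rp.recommended_provider_name, ', ') AS provider_names
    FROM properties p
    LEFT JOIN properties_recommended_providers pr ON pr.prop_id = p.id
    LEFT JOIN recommended_providers rp ON rp.id = pr.recommended_provider_id
    GROUP BY p.id
    ORDER BY p.id
""").fetchall()
print(rows)
```

Property 1 comes out with 'provider 1, provider 2' (the order inside the concatenated list is not guaranteed unless you order it, as the second query does) and property 2 with 'provider 3'.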
I see "over-normalization" as the main problem.
The focus for this query seems to be on "language", yet that is the last thing checked.
For one attempt at a solution, let's rearrange the schema to these two tables:
CREATE TABLE restaurant_attributes (
restaurant_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
rating INT NOT NULL,
PRIMARY KEY (restaurant_id)
);
CREATE TABLE restaurants_by_lang (
restaurant_id INT UNSIGNED NOT NULL,
language VARCHAR(5) CHARACTER SET ascii NOT NULL, -- see note
name VARCHAR(255) NOT NULL,
location VARCHAR(255) NOT NULL,
PRIMARY KEY(language, restaurant_id),
INDEX (language, location, name), -- perfect for the query
INDEX (name),
INDEX (location)
);
Now the query is simply:
SELECT location, name
FROM restaurants_by_lang
WHERE language = 'en'
ORDER BY location, name
LIMIT 0, 50;
This schema will even allow for efficient "pagination", which will be your next problem. (See here.)
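To make the pagination point concrete, here is a small sqlite3 sketch of the "remember where you left off" keyset technique (my reading of the approach the pagination link describes): instead of OFFSET, page 2 seeks strictly past the last (location, name) of page 1, so the index walk starts where the previous page ended. The data is synthetic.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE restaurants_by_lang (
    restaurant_id INTEGER NOT NULL,
    language TEXT NOT NULL,
    name TEXT NOT NULL,
    location TEXT NOT NULL,
    PRIMARY KEY (language, restaurant_id)
)
""")
# 200 synthetic rows spread over 10 locations
conn.executemany(
    "INSERT INTO restaurants_by_lang VALUES (?, 'en', ?, ?)",
    [(i, f"name{i:03d}", f"loc{i % 10}") for i in range(1, 201)],
)

PAGE = 50

# OFFSET pagination: the server still reads and discards the skipped rows.
page2_offset = conn.execute("""
    SELECT location, name FROM restaurants_by_lang
    WHERE language = 'en'
    ORDER BY location, name
    LIMIT ? OFFSET ?
""", (PAGE, PAGE)).fetchall()

# Keyset pagination: remember the last (location, name) of page 1 ...
last_loc, last_name = conn.execute("""
    SELECT location, name FROM restaurants_by_lang
    WHERE language = 'en'
    ORDER BY location, name
    LIMIT ?
""", (PAGE,)).fetchall()[-1]

# ... and continue strictly after it (row-value comparison).
page2_keyset = conn.execute("""
    SELECT location, name FROM restaurants_by_lang
    WHERE language = 'en' AND (location, name) > (?, ?)
    ORDER BY location, name
    LIMIT ?
""", (last_loc, last_name, PAGE)).fetchall()

print(page2_keyset == page2_offset)  # same page 2, without re-reading page 1
```

Because (location, name) is unique here, both methods return identical pages; the keyset version just never touches the rows before the bookmark.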
I will suggest that if you have a million restaurants, listing the first 50 by location is a next-to-useless UI design. I recommend re-thinking the need for the query. For example, instead of paginating through the entire list, I suggest some form of drill-down such as country to state to city/area to restaurants. It will be a lot faster for the user to find Zahir's in Zimbabwe.
But... since I can't see the rest of your SELECTs, I don't know what other things I may have made worse for some of them.
I picked a 5-char language column based on the standard codes for such (e.g., 'en-US' fits in 5 characters). Do not use 255 unless you really need it.
Testing
When testing with less data than you will eventually have, this technique is often handy...
FLUSH STATUS;
SELECT ...;
SHOW SESSION STATUS LIKE 'Handler%';
Then look at the numbers. A number that approximates the number of rows in the table (or some multiple thereof) indicates a table scan(s). A number that approximates the LIMIT value indicates that the query could whittle down the task efficiently.
I claim you will see '50' with my approach, and at least 7K (20K?) from your approach.
Note: Page 4 (with LIMIT 150, 50) would say 200 for mine; yours will not change. With the technique in my pagination link, even page 4 will be only 50.
Each temp table (due to GROUP BY and/or ORDER BY) will show up in Handler_write, indicating the number of rows written (times the number of tmp tables).
Best Answer
You simply have to do two JOINs on the same table, just like here: http://sqlfiddle.com/#!9/9f703e/1/0
For the summed one, you have to use COALESCE to handle the NULL values, just like here: http://sqlfiddle.com/#!9/9f703e/3/0
In your next question, I hope you create the table in SQL Fiddle or db-fiddle and share it, or include the CREATE code, so we can build the table quickly and provide faster answers.
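Since the original tables aren't shown, here is a sketch of the pattern using hypothetical tables (users referenced twice by orders, an invented amount column): the same table is joined twice under different aliases, and COALESCE turns NULL values into 0 before summing.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Hypothetical schema: each order references users twice
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    buyer_id  INTEGER NOT NULL REFERENCES users(id),
    seller_id INTEGER NOT NULL REFERENCES users(id),
    amount    INTEGER              -- may be NULL
);
INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
INSERT INTO orders VALUES (10, 1, 2, 25), (11, 1, 2, NULL);
""")

# Two JOINs on the same table, each under its own alias;
# COALESCE replaces NULL amounts with 0 so SUM is not skewed.
rows = conn.execute("""
    SELECT b.name AS buyer, s.name AS seller,
           SUM(COALESCE(o.amount, 0)) AS total
    FROM orders o
    JOIN users b ON b.id = o.buyer_id
    JOIN users s ON s.id = o.seller_id
    GROUP BY b.id, s.id
""").fetchall()
print(rows)  # [('alice', 'bob', 25)]
```

Without the COALESCE, the NULL amount would simply be ignored by SUM here; it matters more when every value in a group is NULL and you want 0 rather than NULL back.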