Build llama.cpp with GPU support
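
A minimal sketch of an NVIDIA CUDA build using CMake, assuming the CUDA toolkit and a compatible driver are already installed and that the current llama.cpp tree exposes the `GGML_CUDA` CMake option (older revisions used `LLAMA_CUBLAS` instead). On Apple Silicon no extra flag is typically needed, since Metal support is enabled by default there.

```shell
# Fetch the source tree.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Configure with CUDA support enabled.
# (On older checkouts the equivalent flag was -DLLAMA_CUBLAS=ON.)
cmake -B build -DGGML_CUDA=ON

# Compile in Release mode, using all available cores.
cmake --build build --config Release -j
```

After the build, GPU offload is controlled at run time: passing `-ngl <N>` (number of GPU layers) to the resulting binaries, e.g. `./build/bin/llama-cli -m model.gguf -ngl 99`, moves up to N transformer layers onto the GPU; the startup log should report the layers offloaded and the CUDA device in use.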