llama.cpp GPU offloading not working
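
A common cause of this symptom is that the binary was built without a GPU backend, in which case the layer-offload flag silently falls back to CPU. Below is a minimal sketch of a CUDA build and a run that requests offloading; the flag and target names reflect recent llama.cpp versions (the CLI binary was called `main` in older releases, and the older CMake option was `-DLLAMA_CUBLAS=ON`), so adjust to your checkout:

```shell
# Rebuild with the CUDA backend enabled; without this, -ngl has no effect.
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release

# Request offloading of up to 99 layers to the GPU
# (clamped to the model's actual layer count).
./build/bin/llama-cli -m ./models/model.gguf -ngl 99 -p "Hello"
```

If offloading is working, the startup log should mention layers being offloaded to the GPU, and VRAM usage should rise (check with `nvidia-smi` while the model loads). If the log reports 0 layers offloaded despite `-ngl`, the build most likely has no GPU backend compiled in.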