Crafting Web Cek Khodam

Authors: website cek khodam x openai
There is a booming trend called "Cek Khodam" on various social media: a trend where someone's Khodam is "predicted" based on their name. A Khodam here means a hidden guardian that a person has to protect them from bad things.

It does sound ridiculous, but it is entertainment for some people. Some are even willing to pay for a reading as good luck for the day.

After following this trend several times, many of the predictions sounded simply made up. So what if the "prediction" came from OpenAI instead? Interesting, isn't it?

OpenAI provides artificial intelligence, built on LLM (large language model) technology, that helps answer all kinds of problems. It offers an API we can integrate with: we send it a prompt, and it generates suggestions as a response.

The prompt can be set up before we issue a command, because OpenAI's chat API distinguishes a "system" role, which gives the model its instructions, from a "user" role, which carries the command the user actually gives.

Each command prompt consumed is billed in tokens to get the suggestion result; for reference, GPT-3.5 is quoted at $5 per 1M tokens.
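As a rough sketch of what that billing looks like in practice (the $5-per-1M-token rate below is this article's reference figure; actual per-model rates are listed on OpenAI's pricing page):

```javascript
// Assumption: the article's quoted rate for GPT-3.5. Check OpenAI's
// pricing page for the current per-model input/output rates.
const PRICE_PER_MILLION_TOKENS = 5; // USD

// Estimate the cost of one request from its token counts.
function estimateCostUSD(promptTokens, completionTokens) {
  const totalTokens = promptTokens + completionTokens;
  return (totalTokens / 1_000_000) * PRICE_PER_MILLION_TOKENS;
}

// The API reports real usage on every response, e.g.:
//   const { prompt_tokens, completion_tokens } = completion.usage;
console.log(estimateCostUSD(1200, 300)); // cost of a 1500-token exchange
```

This makes it easy to see why trimming prompts and capping completions matters once a site like this gets traffic.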

There are various ways to save tokens, such as scoping the command in the "system" role before prompting as the "user", so the results are narrowed down to exactly what we want.

```javascript
import OpenAI from "openai";

const openai = new OpenAI();

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "Who won the world series in 2020?" },
      { role: "assistant", content: "The Los Angeles Dodgers won the World Series in 2020." },
      { role: "user", content: "Where was it played?" },
    ],
    model: "gpt-3.5-turbo",
  });

  console.log(completion.choices[0]);
}

main();
```

Another way is to tune temperature (scale 0–2) or top_p (scale 0–1): the lower the value, the more consistent the results, while higher values produce more random suggestions.

And the last way is powerful but will affect the results: setting max_tokens, so that whatever completion comes out is immediately cut off once it reaches the max_tokens limit.

```javascript
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const response = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [
    {
      role: "system",
      content: "You will be provided with a product description and seed words, and your task is to generate product names.",
    },
    {
      role: "user",
      content: "Product description: A home milkshake maker\nSeed words: fast, healthy, compact.",
    },
  ],
  temperature: 0.8,
  max_tokens: 64,
  top_p: 1,
});
```
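When max_tokens does cut a completion short, the API tells you: each choice carries a finish_reason, which is "length" when the limit was hit and "stop" when the model finished naturally. A small helper to check for that (the sample response object here is illustrative, not a real API call):

```javascript
// Detect whether a completion was truncated by max_tokens.
// finish_reason is part of the Chat Completions response:
// "stop" = model finished naturally, "length" = max_tokens was hit.
function wasTruncated(completion) {
  return completion.choices[0]?.finish_reason === "length";
}

// Illustrative response fragment (not from a real request):
const sample = {
  choices: [{ message: { content: "FastBlend Compact" }, finish_reason: "length" }],
};
console.log(wasTruncated(sample)); // true
```

This is worth checking on a site like this, since a khodam reading that stops mid-sentence looks broken to the visitor.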

In making this khodam-check website, I use two kinds of models: chat completion (text generation) and image generation.

```javascript
// aiService / aiServiceImage are this site's wrappers around the OpenAI APIs.
const firstResponse = await aiService('berikan satu contoh gabungan nama hewan dan kata sifat', name); // "give one example combining an animal name and an adjective"
const image = await aiServiceImage(`${firstResponse?.data?.data}`, name);
const second = await aiService(`berikan penjelasan arti kata berikut ${firstResponse?.data?.data}`, name); // "explain the meaning of the following words"
const third = await aiService('berikan kata motivasi', name); // "give a motivational quote"
```
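The article doesn't show aiService itself, so here is a minimal sketch of what such a wrapper might look like, applying the token-saving ideas above. Both buildMessages and this aiService signature are assumptions, not the site's actual code; the OpenAI client is passed in by the caller (e.g. `new OpenAI({ apiKey: process.env.OPENAI_API_KEY })`):

```javascript
// Hypothetical helper: scope every khodam request with a system prompt so the
// completion stays short (saving tokens) and is tied to the submitted name.
function buildMessages(prompt, name) {
  return [
    { role: "system", content: `You generate khodam readings for the name "${name}". Answer in one short phrase.` },
    { role: "user", content: prompt },
  ];
}

// Sketch of a wrapper in the spirit of the article's aiService.
// `client` is an OpenAI SDK instance supplied by the caller.
async function aiService(client, prompt, name) {
  const completion = await client.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: buildMessages(prompt, name),
    temperature: 0.8, // some randomness, so readings vary
    max_tokens: 64,   // hard cap per reading
  });
  // Mirror the shape the article's code reads: response?.data?.data
  return { data: { data: completion.choices[0].message.content } };
}
```

Keeping the system prompt and max_tokens inside one wrapper means every call on the site pays the same small, predictable token cost.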

In the next article, we will discuss best practices for OpenAI integration, using rate limiting with Redis.

If this article was useful, don't forget to share it and leave a comment. Stay tech-savvy and keep the conversation going. Cheers to a brighter, more connected future! 🚀✨