Kaan Volkan has read over 10,000 technical pages on IG technologies and helped numerous organizations clean up file shares.

Now he takes on the AI challenge, bringing his expertise in Statistics and Software Engineering to solve Records-related problems.

Records Management in the Era of AI - 10 hours

Retention schedules, ROT clean-up, redactions, auto-classification: AI is here to change everything in our industry, and in all of our careers.

According to a new study by Microsoft and LinkedIn, 66% of leaders say they wouldn’t hire someone without AI skills. 71% say they’d rather hire a less experienced candidate with AI skills than a more experienced candidate without them.

But how safe are these technologies? What about ethics and regulations? How can we upskill ourselves?

All your big questions answered, plus hands-on training to become a one-man Records army.

It is time to become indispensable in the eyes of senior leadership.

Topics

  • What is AI?

  • Older AI systems and how they work

    • Search

    • Statistical Systems

    • Neural Networks

      • How do they work?

      • AI learns how to drive a car

      • Network depth

      • Network width

      • Why is AI so powerful?

      • AI is good, but can we make it better?

  • LLMs, RAG and Agentic AI

    • Why is language so hard to crack, and how did LLMs crack it?

      • Data Complexity

      • Markov Chains

      • Cultural and neurological challenges

      • How LLMs work

    • Future of LLMs, RAG and Agentic AI

  • Prompting, prompt engineering, context engineering

    • Hands-on Exercises

      • Get ready for a wedding

      • Plan my trip

      • Edit your resume with AI

      • How to write a cover letter using AI

    • AI Projects and auto-prompting

    • Hands-on agentic AI

  • AI Risks

    • Will AI take my job?

    • Is AI making us dumber?

    • Terminator is real. Here is how you defeat it.

    • Data explosion - each time you edit a PowerPoint with a prompt, that’s a new copy, and Microsoft will charge you to store it.

    • Intrinsic AI Security

      • Prompt attacks

      • Robustness - getting the same response every time (anti-prompt engineering)

      • Hallucinations

      • Model openness - moving from black box to white box, so you can use the model across more fields and verify that it actually works

    • CyberSecurity

      • Agentic AI connected to all your data sources cuts both ways: cybercriminals can use it to find your insurance policy and demand exactly the covered amount

    • Attacking the LLM - extracting training data through reverse inference and node selection; defenses include deduplication before training, unlearning, and live monitoring

      • youtube.com/watch?v=A_P_9mmTuGA - a video demonstration of how training data can be extracted from a model

    • Societal Problems

      • AI bias - originates in the training dataset; it can be attacked by manipulating vector encodings and shifting numeric scores, or through prompt engineering. This causes bias propagation, amplifying the bias even further.

      • Deepfakes

    • Regulations, Privacy and Ethics

      • Privacy concerns and how to overcome them

      • Regulatory response to AI and how to follow it

      • AI ethics

        • Fairness

        • Explainability

        • Accountability

  • How to overcome AI Risks

  • How Records Management improves AI answers

  • Trash Data In, Trash AI Out

  • Use AI for Records Management

    • Let’s update our retention schedule

    • Can AI do redactions? If so, how risky is it?

    • Detect PII with AI

    • Versioning using AI

    • Let’s find records with AI

    • Let’s find ROT with AI

    • Is auto-classification a pipe dream, or is it just around the corner?

  • Is AI a risk for Records Management?

  • AI’s Effect on my career

  • AI Governance Framework

    • Accountability

      • AI

      • Deployer

      • User

      • Auditor

      • Regulators

        • Do not be ambiguous - let only one party take the blame

        • Define the roles clearly - do not be ambiguous, because AI tends to merge roles together

        • AI is a black box - do not hold the AI itself accountable

      • Audit logs (tracing the decision), monitoring, and incident reporting - the three layers of compliance

      • Accountability regulations and standards - Governments and industry groups are advancing legislation and standards for AI accountability. The EU High-Level Expert Group published the “Ethics Guidelines for Trustworthy AI”, identifying legality, ethical compliance, and technical robustness as core principles AI systems must meet, while emphasizing transparency and accountability. The European Commission’s 2021 draft AI Act seeks to define the obligations of developers and users of high-risk AI systems. Singapore’s AI governance framework also advocates fairness, explainability, transparency, and human-centric practices across the AI life-cycle.

      • Independent audits and certification - Independent third-party auditing is key to ensuring AI accountability: it helps expose issues in decision-making and supervises stakeholders’ behavior. Scholars have proposed institutions such as the Independent Auditing of AI Systems to audit highly automated systems and foster responsible development. Policymakers can require high-risk AI systems to pass qualification assessments or obtain licenses before deployment; such external oversight pressures developers and deployers to follow safety and ethical norms. Audit institutions must themselves be held accountable - industry associations or authorities should regulate their credentials, and misconduct such as falsified reports or collusion with audited entities should be punished. Proper oversight ensures independence and credibility in AI audits, preventing a regulatory vacuum.

      • Legal clarity and liability insurance - Legal frameworks must define the responsibilities of all stakeholders in the AI ecosystem to avoid blame-shifting; without such clarity, disputes over responsibility are likely. Legal principles are needed to determine who is accountable for foreseeable and avoidable mistakes. Introducing liability insurance and compensation funds is another key strategy: drawing on workplace-injury compensation models, “no-fault compensation” systems can compensate victims of AI-related harm swiftly, without lengthy fault-finding procedures. This guarantees redress for victims and encourages developers and users to report problems and learn from them without fear of litigation. Combined with mandatory incident reporting and independent investigative institutions, this forms a closed-loop system of accountability and continuous improvement.

  • More topics to come…
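To make the accountability ideas above concrete: the audit-log layer (tracing the decision, monitoring, incident reporting) can be sketched as a single log record per AI decision. This is a minimal illustrative sketch, not a standard schema - the function name, fields, and the 0.5 incident threshold are all assumptions for the example.

```python
import json
from datetime import datetime, timezone

def log_ai_decision(model, prompt, response, confidence, reviewer=None):
    """Build one JSON audit-log record for a single AI decision.

    Hypothetical schema covering the three compliance layers:
    audit logging, decision tracing, and monitoring/incident flags.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,                # which system decided (tracing)
        "prompt": prompt,              # input that led to the decision (tracing)
        "response": response,          # the decision itself (audit log)
        "confidence": confidence,      # monitored metric
        "human_reviewer": reviewer,    # accountable party, if any
        "incident": confidence < 0.5,  # flag low-confidence decisions for review
    }
    return json.dumps(entry)

# Example: log one auto-classification decision on a record.
record = json.loads(log_ai_decision(
    model="retention-classifier-v1",
    prompt="Classify: 2019 invoice from vendor X",
    response="Financial record - retain 7 years",
    confidence=0.92,
    reviewer="records.manager@example.com",
))
```

Because every record names the model, the reviewer, and the full prompt/response pair, an auditor can later reconstruct who (or what) made each decision - which is exactly the role-separation the framework above calls for.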

Start the Course